Wednesday, December 21, 2011

netcat as WebServer

I needed a (dummy) web server that returns only a 201 CREATED response, for testing purposes. Instead of using a real web server, I found a simpler way to achieve that. Using netcat as follows is enough for me to do some testing on my programs.

$ nc -C -l 8080 -q 1 << EOF
HTTP/1.0 201 CREATED
Content-Type: text/html
EOF
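
Assuming the dummy server above is listening on port 8080, a quick check from Python (in the same urllib2 style as the rest of these posts) might look like this:

import urllib2

# hit the dummy netcat server and inspect the status code and headers
resp = urllib2.urlopen('http://localhost:8080/')
print resp.getcode()   # expected: 201
print resp.info()      # the headers sent by netcat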

Saturday, November 26, 2011

Interacting with Django Tastypie API Using Python Libraries

If you already have a Django web app, consider extending its functionality with django-tastypie. It has pretty decent docs as well. With django-tastypie you can create a RESTful web service. I am not going to talk about django-tastypie in this post; instead I would like to share how to interact with a django-tastypie API using Python's urllib and urllib2 libraries.

The following code gives some idea of how to get data using the API, with the ApiKeyAuthentication method.

import urllib, urllib2

api_url = 'http://localhost:8080/api/app1/model1/?username=apiuser1&api_key=apikey_for_apiuser1'
http_req = urllib2.Request(api_url, headers={'ACCEPT': 'application/json, text/html'})
try:
    resp = urllib2.urlopen(http_req)
    contents = resp.read()
    print contents
except urllib2.HTTPError as he:
    print he.getcode()


The code below gives some idea of how to create data using the API, with the ApiKeyAuthentication method.


import urllib, urllib2
import simplejson as json


api_url = 'http://localhost:8080/api/app1/model1/?username=apiuser1&api_key=apikey_for_apiuser1'
json_post = json.dumps({"model_name": "TEST13"})
http_req = urllib2.Request(api_url, json_post, {'Content-Type': 'application/json', 'ACCEPT': 'application/json, text/html'})
try:
    response = urllib2.urlopen(http_req)
    print response.read()
    print response.getcode()
    print response.info()
    print response.msg
except urllib2.HTTPError as he:
    print he.getcode()
    print he.info()
    print he.msg
except urllib2.URLError as ue:
    print ue

Both examples use the JSON data format. Remember to set the appropriate Content-Type and ACCEPT headers if you use a different data format.
If it is necessary to use an HTTPS secure connection and a proxy, refer to this post for more information.


Distributing Large Files - Implementation Part 1

This post is related to my previous post about distributing large files http://murzal-arsya.blogspot.com/2011/11/distributing-large-files.html.

Finally, I implemented the distribution strategy I explained in my previous post. Let me give a little bit of info about the infrastructure.

1 Server:
Brand/Type: IBM System x3650 M3
OS: Centos 6
Service: Samba
NIC: 2 Gigabit

1 Switch: HP Procurve 1810 24g (Gigabit)

10 Clients:
Brand/Type: HP Pavilion HPE
OS: Windows 7
Apps: QGIS
NIC: 1 Gigabit


I have Samba configured as shared storage. Within the shared folder I put the large image files to be distributed to the clients. My strategy is to create a chain distribution: from the server to PC-01, from PC-01 to PC-02, and so on, until PC-10. I chose robocopy for its simplicity, and it does the job well.
So, I created a batch file to run the following commands.

Since I need to access the shared folder, I need to make sure the connection can be established with the correct user for authentication. To do that, before running robocopy, I use the net use command to authenticate:
>net use \\SERVER\share_images /USER:user1 password1
Then I can execute robocopy with something like the following:
>robocopy \\SERVER\share_images C:\images /E /PURGE /COPY:DAT

The ideal way to trigger the synchronization would be whenever a change is detected in each shared folder, queuing the sync processes when multiple changes are detected. That is beyond the scope of this post.

Sunday, November 20, 2011

Python HTTPS POST Data Over Proxy

Sending HTTPS POST data through a proxy is pretty simple in Python. Everything is done with urllib2, except for encoding the POST data, which uses urllib.urlencode.
Here is an example:


import urllib, urllib2

PROXY = urllib2.ProxyHandler({'https': 'http://your_proxy:your_proxy_port/'})
PROXY_OPENER = urllib2.build_opener(PROXY)
urllib2.install_opener(PROXY_OPENER)

URL = 'https://www.post-test-site.com/post.php'
POST = {'data1' : 'Your Data1', 'data2' : 'Your Data2'}

POST_DATA = urllib.urlencode(POST)
HTTPS_REQ = urllib2.Request(URL, POST_DATA)
RESP = urllib2.urlopen(HTTPS_REQ)
OUTPUT = RESP.read()

print OUTPUT

When your proxy needs authentication, you need to use
urllib2.HTTPBasicAuthHandler and/or urllib2.HTTPPasswordMgrWithDefaultRealm.
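
A minimal sketch of that, using urllib2.ProxyBasicAuthHandler (the handler that answers the proxy's 407 challenge) together with HTTPPasswordMgrWithDefaultRealm; the proxy host and credentials are placeholders:

import urllib2

PROXY_URL = 'http://your_proxy:your_proxy_port'

PASSWORD_MGR = urllib2.HTTPPasswordMgrWithDefaultRealm()
PASSWORD_MGR.add_password(None, PROXY_URL, 'proxy_user', 'proxy_password')

PROXY = urllib2.ProxyHandler({'https': PROXY_URL})
PROXY_AUTH = urllib2.ProxyBasicAuthHandler(PASSWORD_MGR)

OPENER = urllib2.build_opener(PROXY, PROXY_AUTH)
urllib2.install_opener(OPENER)
# from here on, urllib2.urlopen() goes through the authenticated proxy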

Tuesday, November 15, 2011

Configure DHCPd to Work on Certain NIC(s)

Your server has more than one network interface, and you need the DHCP server to serve only one (or some) of them.

Here is how you configure it on Ubuntu:

# File: /etc/default/dhcp3-server
INTERFACES="eth0"
or
INTERFACES="eth0 eth1"

On Redhat/Fedora/Centos:
# File: /etc/sysconfig/dhcpd
DHCPDARGS=eth1
or
DHCPDARGS="eth0 eth1"

Reference:
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-dhcp-configuring-server.html

Saturday, November 12, 2011

Distributing Large Files

One of the requirements of my new project is to be able to distribute large files to 10-15 clients as fast as possible. Fortunately, all NICs (server and clients) support Gigabit connectivity.
I first tested it simply by copying large data from the server to many clients simultaneously. Of course, it was a disaster. While in theory the copying speed would decrease proportionally to the number of clients, in reality the network connections between clients and server, and also between clients, got disrupted. The network was down for some time. I did not investigate further what was happening; all I know is that this was far from the desired speed even when it was going well.
What's next? I gathered some information on how to distribute large files over a LAN. Some suggest BitTorrent, a parallel filesystem, multicast copy, (win) mesh, and other approaches. My concern is that I have to use a solution that is as simple as possible, since there will not be any expert IT guy around in the future.
While BitTorrent sounds pretty promising, I had to leave out this option. Not only because of the capability of the people who will maintain the system, but also because it would take some time to prepare a large file for distribution by BitTorrent. Another thing is that small torrent packets could really saturate the network.

As for a parallel filesystem, I had to ditch this one too, because it would be more complicated and confusing for a basic admin. I should also mention that all clients run Windows 7, and I am not really sure there is stable parallel filesystem support for it.

Multicast copy would be interesting, but I could not find any applications for Windows 7.

So, I figured that I need to make my own simple solution. The goal here is to distribute large files, around 1-5 gigabytes each, without overloading the connection between the source (server) and a target (PC).

I simply came up with a kind of chain distribution: transfer the file from the server to PC1, from PC1 to PC2, and so on. Doing this, I can use all of the available incoming gigabit bandwidth as well as the outgoing one. I tested it and it worked quite well. Of course, there will be a problem if one of the PCs is down, because the distribution cannot continue to the rest of the PCs after it. So, rerouting must be done either manually or automatically.

As for the tool to copy the data, one can use a simple copy, robocopy, SyncToy, or another mechanism. I am planning to use command-line robocopy.

I also plan to use load balancing across both NICs on the server. That way, I can create two chains within the network and perform the transfers to both chains simultaneously.

Thursday, October 20, 2011

Permanently Set Network Interface Speed, Duplex or Auto Negotiate Settings on Linux


On RedHat (based) Linux:

Edit /etc/sysconfig/network-scripts/ifcfg-eth0 file for eth0 interface.
On my Fedora 15 system, the corresponding file is named /etc/sysconfig/network-scripts/ifcfg-Auto_wlan.

So, open the file that matches your interface, then append the following line:
ETHTOOL_OPTS="speed 100 duplex full autoneg off"

Save the file, and restart network interface by executing the following command:
# /etc/init.d/network restart

The previous example is for setting the NIC to 100Mbps, full duplex and no auto negotiation.


If you want 1000Mbps, use the following configuration:
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
or
ETHTOOL_OPTS="speed 1000 duplex full autoneg on"



For Ubuntu, and perhaps Debian and other Debian-based distributions, edit /etc/network/interfaces and add a pre-up statement, so it will look something like the following:

auto eth0
iface eth0 inet static
pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

Refer to this link for more information.

For Ubuntu, using the $IFACE variable:
Edit /etc/network/interfaces and add the following line:
pre-up /usr/sbin/ethtool -s $IFACE autoneg off speed 100 duplex full

Tuesday, October 18, 2011

IBM System x3650 M3 Disk Configuration

I just finished working with an IBM System x3650 M3. In the beginning, I had trouble with both Ubuntu Server 10.04.3 LTS and Centos 6 not detecting the internal hard disks. After a little searching, some people suggested the problem was due to a missing driver, but that is not the case, since Ubuntu Server 10.04.3 LTS already has it.
Then I tried Centos 6 and faced the same problem: no hard disk detected. Fortunately, I found a forum post mentioning that the hard disks need to be configured in the BIOS first. Aha. That is actually where the issue lies.

To configure the hard disks in the BIOS, boot up the server, wait until a screen mentions the MegaRAID WebBIOS Configuration, and press Ctrl-H.
Then, configure the hard disks as a RAID array. After that, use the Configuration Wizard to set them up as many Disk Groups as needed. At this point, Centos 6 could detect the hard disks.

The official guide from IBM, Installation and User's Guide - IBM ServeRAID-M Software (WebBIOS, MegaRAID Storage Manager, and MegaCLI), can be found here.

Thursday, October 6, 2011

Highcharts Month Year X Axis Format

Highcharts is really awesome. If you need neat-looking charts, this is the answer.

I faced a bit of a problem setting the X axis date format to stick with a Month-Year format. That is because Highcharts automatically adjusts to the best possible date range. The way to keep the date range monthly (Month-Year format) is to use the tickInterval configuration.
To get a monthly date range, set the tick interval to approximately one month, so the value is: 31 * 24 * 3600 * 1000.

If Highcharts' licensing does not fit you, then take a look at jqPlot. It is not as fancy as Highcharts, but in my opinion it's the best totally free JS charting solution.




Django ORM Duplicate Group By Condition

I noticed that when using group by with the Django ORM, the group by conditions are duplicated. Then I found more information about it here. Fortunately there is a patch, and the diff file.

The patch should be applied to compiler.py module. On my Ubuntu 10.04 system it is located:
/usr/local/lib/python2.6/dist-packages/Django-1.3.1-py2.6.egg/django/db/models/sql/compiler.py


Friday, September 16, 2011

Dlink DNS320 NAS as SVN Server

Continuing my previous post about the Dlink DNS320 NAS, I would like to share how to turn this NAS into an SVN server. I am still amazed by this NAS and how handy and quite powerful it becomes, because I have Debian Squeeze installed and running on it. Once Debian Squeeze is installed, I can install anything I need from the repositories. Please read my previous post on how to install it on this NAS.

Here is how I set up this NAS to become an SVN server.

Install subversion, then create the svn repos directory and a repository:


# apt-get update
# apt-get install subversion


# mkdir -p /var/svn/

# svnadmin create /var/svn/projectalpha


Add svn user and group:
# adduser --system --group --shell /usr/sbin/nologin --disabled-login svn

Set the appropriate ownership for the repository directory:
# chown -R svn.svn /var/svn

Add existing users to the svn group so they have access to the repo directory:
# usermod -a -G svn arsya

Now set up an SSH server; clients will connect to this machine using SSH. Install openssh-server if it is not there yet:
# apt-get install openssh-server

Perform the following command to test via svn+ssh protocol:
$ svn co svn+ssh://username@hostname/var/svn/projectalpha

Now, what if you have your ssh server running on custom port? How can you tell your svn client to use it?

If you are using the subversion command-line client, the one you (apt-get) installed earlier, simply define a tunnel in your subversion configuration file, ~/.subversion/config:
[tunnels]
sshtunnel = ssh -p <your_ssh_port>
Then check out using the svn+sshtunnel:// scheme instead of svn+ssh://.


If you are using the TortoiseSVN client on Windows, go to TortoiseSVN -> Settings -> Network and set the SSH client to:
C:\Program Files\TortoiseSVN\bin\TortoisePlink.exe -P <your_ssh_port>

Note that I did not need to install the apache2 web server, since svn+ssh is enough for my use; it also saves some memory on this memory-limited NAS.

Thursday, September 15, 2011

DLink DNS320 NAS with Extended Services

The DLink DNS320 NAS is quite awesome, because it is quite affordable and one can hack it to make it even more useful than it already is.

This NAS sports a Marvell Kirkwood processor with 128 MB of RAM. Here are the technical details:

CPU: 800 MHz Marvell 88F6281 (Kirkwood)
RAM: 128 MB
USB: 1 USB 2.0 port (front)
LAN: Marvell 88E1118R-NNC1


To get more info please go here.

Looking at that spec, one gets tempted to figure out a way to turn it into more than just a storage server. Fortunately, there is a way to extend its functionality using fun_plug. Fun_plug makes installing other services very easy. Here and here contain more detail about setting it up on the DNS320. Please install fun_plug (ffp) before installing Debian Squeeze, and make sure you use this ffp script for the DNS320, otherwise you will receive errors.

For me that is not enough, since I know there is a way to install Debian Squeeze on it and chroot into it automagically. Now this is amazing, because you get a full-blown Debian Linux distro running inside your NAS.
What you need is to download Debian Squeeze here, store it in the root of your NAS (normally called Volume_1), and follow the readme inside the archive.

Once Debian Squeeze is installed, you can install an NTP daemon, a web server, an SVN server, a download manager, and other stuff. The only limitation is the amount of RAM.

Oh, I need to inform you that installing fun_plug and Debian Squeeze is totally safe and reversible, because no ROM flashing is involved. So, what are you waiting for?!




Django ORM DB Transactions to Speed Up Bulk Insert

Using Django means you are bound to the Django ORM. Of course one can use another ORM, but then you lose many neat Django features, such as the admin interface and model forms, and a lot of tinkering is required when not using the Django ORM.
I had a problem when inserting a lot of data using the Django ORM. The performance was unacceptably slow. Just inserting 30 records took more than 10 seconds; imagine inserting 50, 100, or more. That is because each time I called the model's save method, the Django ORM executed and committed it immediately, which resulted in the performance issue.
Luckily there is a way to speed up inserting a large number of records. Django provides a way to manage database transactions, and the way to use it is pretty simple. What we need is TransactionMiddleware. Autocommit is used by default by the transaction manager. In order to speed things up, we need to change it to commit only on success, simply by using the commit_on_success decorator on the related view.

Now, you need to have TransactionMiddleware registered in settings.py, so it will look like the following:

MIDDLEWARE_CLASSES = (
...
    'django.middleware.transaction.TransactionMiddleware',
...
)


After that, you can use it in your views.py like the following:

from django.db import transaction

@transaction.commit_on_success
def viewfunc(request):
    # ...
    # this code executes inside a transaction
    # ...
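
As a rough illustration of the kind of view this helps with (the Item model and the get_rows_somehow helper are hypothetical, just to show many inserts sharing one transaction):

from django.db import transaction
from django.http import HttpResponse

@transaction.commit_on_success
def bulk_import(request):
    # every create() below runs inside one transaction, so nothing is
    # committed until the view returns successfully
    for row in get_rows_somehow(request):
        Item.objects.create(name=row['name'], value=row['value'])
    return HttpResponse('imported')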


By doing that, if the function returns successfully, Django commits all the work done within the function at that point. If the function raises an exception, Django rolls back the transaction. All of this means much faster bulk inserts.

Please refer to here for more details about django transaction management.

Friday, August 5, 2011

PS2 Keyboard Freeze

I faced a problem with random PS/2 keyboard freezes on my ACER Aspire M3970 running Ubuntu 10.10. Really frustrating, because the only way to fix it is to restart the computer.
After doing a bit of research, I found that some people suggest disabling USB Legacy Support in the BIOS. I have already done so and am still monitoring the result.
Hopefully this will solve the random keyboard freeze issue.

Sunday, July 3, 2011

Virtual LPT1 for USB Printer

Another interesting day. I faced a typical problem running old legacy software on new hardware. This time the issue is with printing. The legacy software requires an LPT1 connection to do the printing, which the new hardware does not have.
This would be simple if there were a configuration menu for choosing the appropriate printer; unfortunately there is not.
Another thing that makes it a little bit complicated is that the software prints using raw ESC commands instead of going through the printer driver.
In this case, I somehow need to make the legacy software see and use LPT1.

Now, the first step is to add the printer, which is connected over USB, so it works correctly. The trick is to use a generic text printer as the driver instead of the actual printer's driver. That takes care of the raw ESC commands.
Next step, share the printer. Then, redirect LPT1 to the USB-connected printer configured previously. To do that, go to the command prompt and type the following command:
c:\> net use lpt1: "\\computer_ip\name_of_the_shared_printer" /persistent:yes

At this point, LPT1 is available for the legacy software to use.

Saturday, May 28, 2011

Dynamic Initial Value for Django Admin Inline Form

I am impressed with the Django admin inline form, and even more impressed by how simple it is to set an initial value for it.

One can easily set the initial value for a Django admin add form via a URL GET parameter. For example: http://localhost:8080/admin/myapp/mytable/?myfield_id=1

This will not work if myfield is within the inline form instead of the regular form. To handle that, override admin.TabularInline's formfield_for_foreignkey method.

Here is an example:

class MyTabInline(admin.TabularInline):
    model = models.MyModel
    extra = 1

    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        if db_field.name == 'myfield':
            kwargs['initial'] = request.GET.get('myfield_id', '')
        return super(MyTabInline, self).formfield_for_foreignkey(
            db_field, request, **kwargs)

Pretty simple.

Friday, May 27, 2011

Perform Extra Operations after Saving All Inlines in Django Admin

The Django admin provides quite a solid foundation for performing CRUD operations; it does most things already.
What I needed this time was to perform extra operations after saving all inlines in the Django admin. If I did not need to access the request object, I could perform the extra operations by overriding the model's save method. The thing is, I need the currently logged-in user, which is exactly why I need to do it from admin.ModelAdmin.

My first attempt was to override admin.ModelAdmin's save_model method. It failed, because right after obj.save() the inline data is not yet available.

What next? Fortunately, I found helpful information from this link.

From that, we can intercept right before the response is prepared.
There are three methods related to setting up the response:
response_add, response_change, response_action.

Here is an example of overriding response_add method.
def response_add(self, request, new_object):
    # Perform extra operation after all inlines are saved
    return super(AdminSubClass, self).response_add(request, new_object)
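
For instance, the extra operation I needed (stamping the currently logged-in admin user on the new object) could look roughly like this; the created_by field is hypothetical:

def response_add(self, request, new_object):
    # at this point the object and all of its inlines have already been saved
    new_object.created_by = request.user   # hypothetical field
    new_object.save()
    return super(AdminSubClass, self).response_add(request, new_object)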

There is a ticket for this issue.
Hopefully there will be an even better way to perform such actions in the future.

Friday, May 20, 2011

Remove First N bytes From a (Binary) File

Situation:
You have a (binary) file, and you need to remove the first N bytes of data from it.

Solution:
Use dd. Read more about it here.

Here is how to do it:
$ dd if=input_file ibs=1 skip=N of=output_file

N is the number of bytes to be removed.

example:
$ dd if=input_file.dat ibs=1 skip=3 obs=10M of=output_file.dat


Now, what if you need to remove the first N bytes and the last M bytes from a file?
$ dd if=input_file.dat ibs=1 skip=N count=X of=output_file.dat

To calculate X:
X = actual_file_size_in_bytes - N - M
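
A quick way to compute X (a small sketch; the file name and the N and M values are just examples):

import os

N, M = 3, 7                                # bytes to strip from the front and the back
size = os.path.getsize('input_file.dat')   # actual file size in bytes
X = size - N - M
print X   # use this as the count= value for dd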


I recommend reading through the man page and playing around with the ibs, obs, bs, and count values and other parameters that might be useful.

Thursday, May 19, 2011

Python ValueError Bad Marshal Data

I encountered this problem when I copied my Django project along with (CherryPy's) wsgiserver to another machine and started it.

To solve the issue, I needed to remove wsgiserver.pyc and re-run the program. Just like that.

So, it is a good idea to clean out all the *.pyc files when moving or copying code to a different computer, and let Python recreate them all.
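
One small sketch of doing that cleanup from Python itself, walking the project directory and deleting compiled files:

import os

for root, dirs, files in os.walk('.'):
    for name in files:
        if name.endswith('.pyc'):
            # remove the stale compiled file; python regenerates it on the next run
            os.remove(os.path.join(root, name))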

Serving Django with CherryPy wsgiserver

For my current project, I do not need a heavy-duty web server. What I needed was a simple (Python) web server, and the CherryPy wsgiserver does the job.

To make things simple:
copy cherrypy/wsgiserver/__init__.py into the project folder and rename it as wsgiserver.py.

The next step is to create a wsgiserver startup script. It would be something like the following:

import wsgiserver
import sys
import os
import django.core.handlers.wsgi

if __name__ == "__main__":
    sys.path.append(os.path.realpath(os.path.dirname(__file__)))  # add django project absolute path
    # Startup Django
    os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project_dir_name.settings'
    server = wsgiserver.CherryPyWSGIServer(
        ('127.0.0.1', 8080),
        django.core.handlers.wsgi.WSGIHandler()
    )
    try:
        server.start()
    except KeyboardInterrupt:
        print 'Stopping'
        server.stop()


Of course the performance is below that of Apache, lighttpd, or nginx, but when one does not need high performance and would like a simple (WSGI) web server, this is one of the answers.

Thursday, May 5, 2011

TG2.1 + Eclipse + Pydev

Here's how you do it.

Create launch_tg.py in the same directory as development.ini, containing:
#################################
from paste.script.serve import ServeCommand
ServeCommand("serve").run(["development.ini"])
#################################

Start Eclipse.
File -> New -> Project -> Pydev -> Pydev Project
Project name: test
Click: "Please configure an interpreter in the related preferences before proceeding."
New -> Browse -> /your_virtualenv_dir/your_webapp_dir/bin/python -> Ok -> Ok
Ok -> Finish

Pydev Package Explorer -> right-click on 'test' -> Refresh
Run -> Debug Configurations -> Python Run (double click)
Main tab -> Project -> test
Main tab -> Main module -> ${workspace_loc:your_eclipse_project/src/your_webapp_dir/launch_tg.py}
Arguments tab -> Working directory -> ${workspace_loc:test/src/Test}
Click debug


Taken from: http://ping.fm/DnR7T

TG2.1 Trouble with Sqlite Database

If you use SQLite as the backend DB for your TG2.1 web apps, you might occasionally get errors like the following:
ProgrammingError: (ProgrammingError) Cannot operate on a closed database

It turns out that the default SQLAlchemy connection pool is not optimized for file-based databases like SQLite.

To solve this, simply change the SQLAlchemy pool class to NullPool.

Make the modification in webapp/config/app_cfg.py:
#################################
from sqlalchemy.pool import NullPool
class PLAppConfig(AppConfig):
    # override setup_sqlalchemy to force the NullPool class
    def setup_sqlalchemy(self):
        from sqlalchemy import engine_from_config
        engine = engine_from_config(pylons_config, 'sqlalchemy.', poolclass=NullPool)
        config['pylons.app_globals'].sa_engine = engine
        # Pass the engine to init_model, to be able to introspect tables
        self.package.model.init_model(engine)

base_config = PLAppConfig()
#################################

All set, no more ProgrammingError.

TG2.1 Trouble with One to Many Relationship

Finally, I figured out what causes the 'ModelController' error in the TG2.1 Admin System page. Thanks to Eclipse (Aptana Studio 3) and PyDev, I was able to debug and spot the cause of the trouble.
I was kind of surprised that Sprox, the module responsible for form-model binding, could not handle a unique constraint on a one-to-many relationship.
Because of this, other kinds of constraints on relationships will probably fail too; perhaps UniqueConstraint is also affected.
So, basically, any constraint that is not applied directly to the fields of the SQLAlchemy model cannot be handled properly by TG2.1 (sprox).

Since unique constraints are very common in DB design, I really hope that this functionality can be handled not just on simple fields but also on relationships and multi-column unique constraints.

As for an 'elegant' solution, I have not yet found a way to do it. Perhaps I could add a validator on the relationship field/attribute (in the SQLAlchemy declarative model class), but I have no clue how to implement it. Another thing is that even if I could do that, I am not sure it would work, since I am not even sure the relationship field would be registered to be checked by the framework.

It would be a good thing if sprox checked the relationship field and the foreign key field as one.

For now, I just do a workaround to prevent adding duplicate one-to-many relationships.

I am pretty impressed with TG2.1. This is by far the most flexible web framework I have worked with. My suggestion to those who would use this framework: be prepared to swim and dive in the source code, as you will definitely need to extend functionality by overriding methods and subclassing.

Monday, April 11, 2011

TurboGears 2.1 Admin System Plural Name

I really do not like the way the TG2.1 Admin System forces developers to use plural names, because my language does not use the same grammar. It is totally different, so it's a terrible idea to put an 's' on every model/table name.

To solve this issue, there are two major areas we need to change. The first one is the Genshi template, and the other is the class responsible for parsing the model/table REST URLs.

Fortunately the issue resides in one class, tgext.crud.controller.CrudRestController. Some of its methods use Genshi templates located in tgext.admin.templates, so you might want to copy the templates and make the appropriate changes. Done with one area.

Next is to change the way CrudRestController uses the template and performs the lookup for the model/table REST URLs. The way to change the template is to change the value of the @expose decorator.
Another thing to do is to override the _lookup method.
Inside, you can see code like the following:
model_name = model_name[:-1]
The code above assumes that the model/table REST URL is plural. No good. Don't like it at all. Simply remove that particular line of code and we're done.

TurboGears 2.1 Admin System Change Date Format and Other Values

I have been working a lot with the TG2.1 Admin System lately. If you have worked with Django, you know it has automatic admin page generation. In TG2.1 this functionality is based on tgext.admin, tgext.crud, and sprox.

For those of you already enjoying the TG2.1 Admin System, perhaps you have realized that the date format in the admin system is fixed to the m-d-Y format. What if you want to change the format? Unfortunately the format is hardcoded inside the TableFiller; its exact location is sprox.fillerbase.TableFiller.
When you check the source code of that particular class, you will see that the date format is hardcoded in the get_value(self, values=None, **kw) method. So, what you need to do is subclass TableFiller and override the get_value method.
After changing the date format, I also needed to format some numbers with grouping, so I added some more logic to check whether the data in 'value' is an int/float/Decimal and format it properly.
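
The skeleton of that subclass is roughly the following (a sketch only; how you post-process the value returned by the parent get_value depends on your sprox version and on your columns, so that part is left as a comment):

from sprox.fillerbase import TableFiller

class MyTableFiller(TableFiller):
    def get_value(self, values=None, **kw):
        value = super(MyTableFiller, self).get_value(values, **kw)
        # reformat the date columns here (e.g. to d-m-Y) and apply number
        # grouping to int/float/Decimal values before returning
        return value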

The next step is to understand that the TableFiller can be plugged into CrudRestControllerConfig (tgext.admin.config.CrudRestControllerConfig) via the 'table_filler_type' variable. So, write the appropriate code to set your TableFiller inside CrudRestControllerConfig.

CrudRestControllerConfig itself can be used in tgext.admin.tgadminconfig.TGAdminConfig / tgext.admin.config.AdminConfig.
So, put the CrudRestControllerConfig inside *AdminConfig, with the appropriate dbfield variable.

AdminConfig itself can be used as config_type parameter for tgext.admin.controller.AdminController.

Now, everything is set and ready to use.

I want to give you a little hint: AdminController's _lookup method assumes that all your models' REST URLs are in plural form. If you don't like or need that, you will have to do something about it. I will write about this in the next post.

In my opinion, spending time understanding the layers of the TG2.1 framework is really worth it. Many things are already handled by TG2.1 (plus the plugins/extensions), and if that is not enough, simply extend it by subclassing the necessary class. The framework is well designed. The only drawback is the lack of complete, integrated documentation.

Friday, April 8, 2011

TurboGears 2 Editable Primary Key Field

TG2.1 is a nice web app framework; I just wish it had better documentation. Compared to Django, the TG2.1 documentation is way behind, but the framework is still much more flexible.

I encountered a problem where the TG2 admin page, created with the TG Administration System, automatically sets the (HTML) input field of the model's PK to disabled. Sprox assumes that your model's PK is auto-generated.

Someone already reported the issue here
and then created the proper ticket for sprox here.

That gave a good clue where to check, so after taking a look inside the sprox.formbase source code, below is the reason why PK fields are non-editable.
class EditableForm(FormBase):
    "..."
    def _do_get_disabled_fields(self):
        fields = self.disable_fields[:]
        # the PK field is automatically appended to the disabled list
        fields.append(self.provider.get_primary_field(self.entity))
        return fields

The same goes for class AddRecordForm(FormBase).
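
Based on that, a possible workaround sketch (only sensible when your PK really is not auto-generated) is to subclass the form and leave the PK out of the disabled list:

from sprox.formbase import EditableForm

class MyEditableForm(EditableForm):
    def _do_get_disabled_fields(self):
        # return only the explicitly disabled fields, so the PK stays editable
        return self.disable_fields[:]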

I wrote about it here.

Hopefully useful for those who need it.

DOS Error 4

Again, working with an old system: 64 MB of memory, DOS only. Ouch! The OS got corrupted somehow, and it was not possible to redirect lpt1 to com1 using c:\> mode lpt1=com1.
I had to back up all the data and reinstall the OS. Ouch! Where can I find DOS? Another thing is the BIOS was way outdated too: no PXE boot, no USB boot, no CD-ROM device, and a broken floppy drive. Quadruple ouch!

After some time, I managed to install DOS and restore the data. It was time to run the application, and all I got was DOS Error 4. :(

To make the long story short, I got some good information from http://www.cms-track.com/Support/webhelp/html/idh_dos_error_4.htm

The solution is:
1. SET CLIPPER=F99 in autoexec.bat
2. In config.sys, put FILES= and BUFFERS=, where
The FILES= must be at least 61. (FILES=61)
The BUFFERS= must be at least 41. (BUFFERS=41)

No more old system please!!!

Tuesday, April 5, 2011

Qt Designer, Add Toolbar

Taken from the official website itself:
Qt is a cross-platform application and UI framework. Using Qt, you can write web-enabled applications once and deploy them across desktop, mobile and embedded operating systems without rewriting the source code.

Totally awesome! After working a bit with GTK, now it's time for Qt. Of course, the first thing I worked with is the Designer, and I really love designing with this tool. It took a bit of time to adjust to the way it lays out the UI, especially after working with the Glade GTK UI designer, but once I got used to it, it just felt more natural designing with Designer.

I would like to share how to add a toolbar in the Designer. It is not so obvious, so I hope this is useful.

The way to add a toolbar is to right-click on a form, main window, or whatever you have, and choose 'Add Toolbar'.


At this point, you will not get a functional toolbar, since you only have the container, without any buttons.

The next step is to add resources to the project using the 'Resource Browser'. If you don't see it, you can activate it from the menu bar, under the View menu. Then, from the Resource Browser, click the 'Gear' button to open the 'Edit Resources' dialog.





Once you have it open, add a resource file by clicking the 'New Resource File' button. Then click 'Add Prefix', and after that add the image files for the toolbar icons by clicking the 'Add Files' button.

Now, to add the buttons to the toolbar, simply drag and drop them from the 'Resource Browser' onto the toolbar, and that is it.

Friday, April 1, 2011

Epson POS TM T81

This is the second POS printer device I have worked with. The previous one was dot matrix, RS-232, and had no independent power supply adapter, so it depended on a PC with that particular kind of power outlet. Not so good. Terrible, actually.
The Epson POS TM T81 uses USB and has its own power supply adapter, so it is more portable.

The issue is how to communicate with it. Previously, I sent all the data (with ESC/POS commands) directly. The problem with that approach is the buffer limitation and flow control (handshaking): when I send the data too quickly, some data is discarded once the buffer is full.

After googling for quite a while, I found a CUPS driver for the Epson POS TM T88V, tmt-cups-1.2.1.0.tar.gz. Okay, some progress, I thought. I could choose the driver from the CUPS admin and select the provided PPD. The thing is, when I tried to print some documents and images, I kept getting errors or the document just hung in the printing queue. Even when I could get documents printed, they used a different font than the default printer font and did not look good.
I found error messages in /var/log/messages like the following:
"gs[2599]: segfault at 0 ip 00ad346b sp bf9f4860 error 6 in libgs.so.8.71[74c000+46e000]"
I am not sure what exactly the problem was; all I know is that it failed. I had high hopes for this 'official' driver. Back to square one.

So, I went back to sending raw ESC/POS commands directly to the printer using lpr -o raw. The problem, again, is the handshake and the buffer filling up. Wow, too much work for printing text. The thing is that the communication is done via the USB port. Okay, I tried to talk to the device using PyUSB, but kept getting permission issues. Yeah, I had to manually change the permissions of the /dev/bus/usb/002/... file. Ouch! Another way is to write udev rules. Again, too much work, because even if I solved the permission issue, I would still have to read the device status. Not good at all. Too much work.

Fortunately, I found very good information on ubuntuforums. This guy's solution is simple: add 'FileDevice Yes' in /etc/cups/cupsd.conf, add a printer with Device URI file:/dev/usblp0, and set it as a Local Raw Printer for make and model. Done!

With that configuration, I can write all the ESC/POS commands to a (spool) file and send it to CUPS using lpr. So simple! No more issues with handshaking.
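
In practice that boils down to something like the following small sketch (the printer queue name 'pos_printer' is only an example; ESC @ is the ESC/POS initialize command):

import subprocess

ESC = chr(27)
data = ESC + '@' + 'Hello from the receipt printer\n\n\n\n'

spool_file = '/tmp/receipt.spool'
with open(spool_file, 'wb') as f:
    f.write(data)

# hand the raw spool data to cups
subprocess.call(['lpr', '-P', 'pos_printer', '-o', 'raw', spool_file])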

The next step for me is to be able to print an image/logo. I could not do it with the official driver, so I had to find some other way: back to ESC/POS commands.
This guy is really nice. In his blog, he explains in detail how to convert an image file to monochrome. This could be done with a graphics app, but it is still interesting to know how to do it programmatically. Then, from the monochrome image, build a bit array, do some matrix transformations, translate it to bytes, and send those bytes to the printer. Yeah... very, very interesting.
I translated his explanation into Python code and, after a bit of confusion calculating the high_byte and low_byte, figured out how to get those values; everything works beautifully.
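
For reference, the part that confused me is just splitting a 16-bit size into the two bytes that ESC/POS bit-image commands expect (a sketch; whether the size is counted in dots or in data bytes depends on the exact command you use):

def low_high_bytes(n):
    # ESC/POS commands take sizes as nL (low byte) followed by nH (high byte)
    low_byte = n % 256
    high_byte = n // 256
    return low_byte, high_byte

# example: a line 384 dots wide -> nL = 128, nH = 1
print low_high_bytes(384)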

Full appreciation to two guys, I mentioned above, who lead me in good direction, and get what I wanted.

Tuesday, February 15, 2011

TurboGears 2 presenting relationship

Time for me to move on and learn TurboGears 2.1. Coming from a Django background, I expected somewhat similar elegance in TurboGears, something like urlpatterns, which is quite nice. I did not find anything like that. A bit of a disappointment, because urlpatterns is such a wonderful way to map URL requests to handlers.
The main and biggest issue with Django is its ORM. I really wish we could somehow replace it with SQLAlchemy and still keep the auto-generated admin interface.
SQLAlchemy is totally awesome. If you have not used it yet, please spend some time with it.

Since TurboGears supports SQLAlchemy, I decided to take a look and play around with it. The awesome thing about TurboGears 2.1 is that it also has an Administration System, which auto-generates the web interface using python-rum and sprox.

The first time I used the Admin System, I got the UI (tables and forms) for all the tables prepared. The only issue at that time was that relationships were still displayed using their 'id'. Quite confusing for the end user.

Digging more into TurboGears 2.1, I learned that all of the transformation from the table schema into the web UI is done via sprox, which uses '_name', 'name' (and some other values) as the default column names for representing relationships.

Did some more digging and I figured out how to define the desired column for relationship representation.

class: TableFiller
modifier: __possible_field_names__ = [ 'brand', 'first_name' ]
class: AddRecordForm
modifier: __dropdown_field_names__ = [ 'title' ]
class: EditableForm
modifier: __dropdown_field_names__ = [ 'name', 'info' ]

Those classes could be imported using the following:
from sprox.fillerbase import TableFiller
from sprox.formbase import AddRecordForm, EditableForm

TableFiller is for viewing, AddRecordForm is for adding new records, and EditableForm is of course for editing existing records.

So, what you can do is subclass those classes according to your needs, set either __possible_field_names__, __dropdown_field_names__, or both to the appropriate columns, and then store the subclasses in the controller (project/app/controllers/root.py, for example).
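
Putting that together, a rough sketch (the class names and column names are made up, and the model binding itself is done the same way your existing admin/controller code already does it, so it is left out):

from sprox.fillerbase import TableFiller
from sprox.formbase import AddRecordForm, EditableForm

class ProductTableFiller(TableFiller):
    # columns to use when representing a related record in the listing
    __possible_field_names__ = ['brand', 'first_name']

class ProductAddForm(AddRecordForm):
    # columns to show in the relationship dropdown of the add form
    __dropdown_field_names__ = ['title']

class ProductEditForm(EditableForm):
    __dropdown_field_names__ = ['name', 'info']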

Monday, January 24, 2011

xubuntu alternate install on usb stick

I am working on a project that involves a very old computer with low computing power and little memory, so I had to choose a reasonable Linux distro with a reasonable desktop manager, user interface, etc.
After trying different very lightweight distros, I am back with Xubuntu. I have my reasons, mainly that I am very familiar with this one.
I tried Lubuntu, which uses LXDE, but this old computer could not handle it.
So, I chose the Xubuntu alternate installer and prepared it on a 2 GB USB stick, since the old computer does not have any optical drive.

I faced a problem during installation from the USB media, because the installer kept searching for a CD-ROM (optical) drive. It was terrible. I went to a virtual terminal and tried to mount the USB stick manually as a CD-ROM, but ran into problems.

Thankfully, I found the way to do it successfully on the Ubuntu site: http://ping.fm/nDHWx
The required action to solve the problem is simply to append the following argument:
'cdrom-detect/try-usb=true'
in the GRUB boot menu. To do that, just choose which installation method you want to use in the GRUB menu, press 'E' to edit the command, and append the argument mentioned above at the end.

Now, enjoy the lightweight installer and the distro itself.