Category Archives: Tutorials

Sync Your Stuff to S3

This is a recipe for how I save stuff to S3 from my Mac:

1.) Sign up with S3 (check the pricing!). This will give you access to the AWS Management Console.

2.) Create a Bucket: This can be done via the AWS Management Console. If you are not familiar with the concept of ‘buckets’, check out the S3 documentation. Simply put, a bucket is a virtual storage device with a fixed geographical location.

3.) Go to ‘Security Credentials’ in your account settings in the AWS Management Console and create an access key.

4.) Download JetS3t and unpack it. You then have the JetS3t directory with a bin subdirectory on your Mac.

Open the Terminal and change into the bin directory:
$ cd bin;

Create a file named synchronize.properties there:
$ nano synchronize.properties
and save the following content using your keys from step 3:
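A minimal sketch of what goes into a JetS3t synchronize.properties file (the property names come from the JetS3t documentation; the values are placeholders for your own keys from step 3):

```
accesskey=YOUR_AWS_ACCESS_KEY
secretkey=YOUR_AWS_SECRET_KEY
```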


5.) To sync the contents in the path /Users/marco/MySyncStuff with the bucket myBucketName use this command:
$ ./synchronize.sh UP myBucketName /Users/marco/MySyncStuff/ --properties synchronize.properties

Of course you can create as many buckets as you like, and from here you can script and schedule your data syncs as you wish. Use the command
$ ./synchronize.sh --help
to see what else the tool has on offer.
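The scripting and scheduling can be sketched like this; this is an assumption-laden example (the paths, the bucket name and the wrapper name sync-s3.sh are all made up), not part of the original post:

```shell
# Hypothetical wrapper around JetS3t's synchronize tool, for use from cron.
cat > sync-s3.sh <<'EOF'
#!/bin/sh
# Paths and bucket name are placeholders - adjust to your setup.
cd /Users/marco/jets3t/bin || exit 1
./synchronize.sh UP myBucketName /Users/marco/MySyncStuff/ --properties synchronize.properties
EOF
chmod +x sync-s3.sh
# Schedule it nightly at 02:00, e.g. via 'crontab -e':
#   0 2 * * * /Users/marco/sync-s3.sh >> /Users/marco/sync-s3.log 2>&1
```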

6.) Browsing buckets: JetS3t has its own S3 browser, Cockpit Lite. To start and use it, do the following:

$ cp cockpitlite.sh cockpitlite.command;
$ ./cockpitlite.command &

You should see the Java coffee-cup icon in your Dock. Use your keys to log in and browse your buckets.

You can also use the free S3 Browser for Mac.

Free XSD Editor – Generate XSDs from XML

From an old post: To start out with XML Schema, this might be of interest to you:


Free graphical tool: Liquid XML Studio

[2009-06-27] Update: A very cool feature is the generation of a schema based on example XML files you give Liquid XML Studio! I discovered this when I had built a schema that would not validate against my desired XML structure and I could not figure out why. I generated the XSD like this:

  • Open one of your XML examples with Liquid XML Studio.
  • Select from the menu “Tools / Infer XSD Schema”.
  • You are asked for more examples. I did it with just one XML file.
  • And bingo: You have your XSD.

A diff of my handcrafted version of the XSD against the generated one revealed why mine did not validate ;).

Cool tool, which you can use with the 30-day trial licence.

Start Longer-Running Jobs with screen

On the command line, if you close a console with a running job, you kill the job. This is different with the tool ‘screen’, where you can attach to and detach from a ‘screen’ without terminating it. You can even start a job in a screen on one machine, detach, travel somewhere else and re-attach to it from another machine.

If you do not have screen yet, install it on your Debian box with: apt-get install screen


  • screen -S indexing – Create a screen with the name ‘indexing’.
  • screen -ls – Show available screens.
  • screen -r indexing – Re-attach to the screen ‘indexing’.
  • Ctrl-A, Ctrl-D – Detach from the current screen (without terminating it).
  • exit OR Ctrl-D – Exit from the current screen. This terminates the screen session.
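The commands above can be sketched end to end. This is only a demo: the sleep is a stand-in for a real long-running job, and the private SCREENDIR is a convenience so the demo runs without a system-wide screen directory:

```shell
# Skip gracefully if screen is not installed (apt-get install screen).
command -v screen >/dev/null 2>&1 || { echo "screen not installed"; exit 0; }

# Use a private socket directory so the demo works without root.
export SCREENDIR="${TMPDIR:-/tmp}/screens.$$"
mkdir -p "$SCREENDIR"; chmod 700 "$SCREENDIR"

# -d -m starts the session detached; -S gives it a findable name.
screen -dmS indexing sh -c 'sleep 60'   # stand-in for a long-running job
screen -ls || true                      # 'indexing' shows up as Detached
# Re-attach with 'screen -r indexing'; detach again with Ctrl-A, Ctrl-D.
screen -S indexing -X quit              # clean up the demo session
echo "demo finished"
```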

Trigger and Organize Timed Tasks with ‘at’

Since I am doing more and more on the command line, I noticed that sometimes I just wait for one task to finish before doing the next step in a sequence. What if a running task takes an estimated 3-8 hours and it is Friday afternoon?

In this case you can use the command-line tool ‘at’ and schedule something like ‘at 1am tomorrow do xyz’.

If you do not have at yet, install it on your Debian box with: apt-get install at

There are 3 commands available:

  • at <datetime> – Starts the scheduling dialog for a specific date and/or time at which to execute certain commands.
  • atq – Shows a list of already scheduled and pending jobs.
  • atrm <job-id> – Deletes a pending job from the job queue.

Case: What do you have to do in order to run a sequence of commands as root at a specific time?

Let’s say it is Friday 15:00h (and the server clock says so too) and you would like to fire the command

  • echo "Good morning!" > hello.txt @ 08:00h tomorrow

You would do the following:

  • Log in as root.
  • Type ‘at 08:00’ <enter>.
  • Since 08:00 today is already in the past, at assumes you mean tomorrow. But you can use all sorts of time and date formats; for example, ‘at 08:00 01.06.2008’.
  • The prompt changes to at> and waits for the commands to be executed sequentially at the time you specified.
  • Type your first command to be executed: ‘cd ~/stuff’ <enter> – to make sure the next command runs in the right directory, since it writes a file.
  • Type your second command: ‘echo "Good morning!" > hello.txt’ <enter>
  • The prompt shows another at>. If you have more commands, add them; in our example we have only two. To finish, press Ctrl-D and you get your normal prompt back.
  • Type ‘atq’ to see the pending jobs in the queue. The job you just added has a job ID in the first column.
  • To delete the job from the queue, type ‘atrm <job-id>’.
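Instead of the interactive at> dialog, you can also feed at the commands non-interactively; a small sketch (the file name job.txt is just an example):

```shell
# Put the two commands from the walk-through into a job file...
printf '%s\n' 'cd ~/stuff' 'echo "Good morning!" > hello.txt' > job.txt
cat job.txt
# ...and submit it for 08:00 tomorrow (requires the at daemon to be running):
#   at 08:00 tomorrow < job.txt
# Check the queue with 'atq' and remove the job with 'atrm <job-id>'.
```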

Good timing!

Use svn:externals to integrate external libraries

As my projects got more complex, I came to use Subversion’s externals. This SVN property enables you to use more than one ‘checkout’ inside your architecture. For example, if you are using Zend Framework as an external library, why make a ‘hard’ copy of it? It gets out of date very quickly, right? Externals allow you and your co-workers to check out a working copy of it (pinned to a particular revision, if you want) and maintain it as an ‘external’. This means you can use ‘svn update’ not only on your own code but also on Zend Framework or any other external libraries in your project architecture. The precondition is that you have access to a Subversion repository that contains the libraries. Externals are very well explained at:

Here is a command-line example to set up and define an external:

Let’s say you would like to use the external library ‘external-lib’ in your project. Change into the directory where you keep your libraries and enter the following commands:

  1. Create a new directory inside your working copy and change into it:
    /home/web/projectname/htdocs/libs$ mkdir external-lib;
    /home/web/projectname/htdocs/libs$ cd external-lib;
  2. Check out a working copy of the desired external library:
    /home/web/projectname/htdocs/libs/external-lib$ svn co <URL-of-external-lib> .;
  3. Change back again and declare your checkout as an external:
    /home/web/projectname/htdocs/libs/external-lib$ cd ..;
    /home/web/projectname/htdocs/libs$ svn propset svn:externals "external-lib <URL-of-external-lib>" .
  4. Commit your changes:
    /home/web/projectname/htdocs/libs$ svn commit -m "External for external-lib set."

Now every time you update your working copy, updates are fetched from your external(s) too.
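For reference, the value of the svn:externals property is one definition per line: a local directory name followed by the repository URL it should be checked out from. A sketch with a made-up URL:

```
external-lib http://svn.example.com/repos/external-lib/trunk
```

After the property is set and committed, a plain ‘svn update’ in the libs directory fetches and refreshes the external.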

Profiling your App with XDebug and KCacheGrind

If profiling applications is something you have heard of but never played with, here is what I learned when I started to dig into this topic:

(1) Install XDebug (if you do not already have it on your dev system); refer to the XDebug docs, where the procedure is very well documented. I used the pecl installer.

(2) I added this content to my php.ini files (for Apache2 and CLI). In the “Switch profiling” and “Switch tracing” sections you can comment the lines “xdebug.profiler_enable = 1/0” and “xdebug.auto_trace = 1/0” in or out. If you leave profiling and/or tracing on, it will significantly slow down all PHP execution on your server and generate many large cachegrind files. It is not wise to leave it on if you do not need to profile or trace!

When I decide to do profiling, I switch it on, run the script executions to be profiled, which produces the required cachegrind files, and then switch it off again. Remember that you have to restart Apache2 after each change to your php.ini for it to take effect.

Set the path to the directory you would like the cachegrind files written to. I set it to ‘/var/www/__profiling_data’.
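A minimal sketch of the relevant XDebug 2 settings (the output directory is my example path; see the XDebug docs for the full list of options):

```
; Switch profiling
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /var/www/__profiling_data
xdebug.profiler_output_name = cachegrind.out.%p

; Switch tracing
xdebug.auto_trace = 0
xdebug.trace_output_dir = /var/www/__profiling_data
```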

(3) Now you need a viewer. I started under Windows with the free tool WinCacheGrind. But to be honest, I expected something cooler… When I tried KCacheGrind, a free tool for Linux, I was pleased to see informative screens like the following:

[Screenshots: KCacheGrind Screen 1, KCacheGrind Screen 2]

For those of you who do not know how to use Linux apps: on Debian, run “apt-get install kcachegrind” and, once it is installed, type “kcachegrind &” in a console after logging in to the Linux GUI.

I was surprised to see at which points my scripts spent their time. Check the XDebug docs for more information on what you can actually read from the profile data.

Happy profiling!

Build Functional Test-Suites with Selenium

I am currently writing lots of Selenium tests for a website with lots of form-based enquiries. Along the way I came across some questions and obstacles to overcome. This is a wrap-up of what I discovered:

Tools you need:

Links you might check out for background information and examples:

I did it this way:

  • I used Selenium IDE to record my tests roughly.
  • I installed/uploaded Selenium Core onto the target system. This is necessary since Selenium Core is based on JavaScript and thus must be loaded from the same domain – keyword: XSS security in browsers, which would otherwise prevent Selenium Core from accessing the page content.
  • I organized my tests in a directory structure like Testarea/Tests.html.
  • I manually updated all single tests to use XPath syntax to check/select items or click buttons and links. By default, Selenium IDE recorded id attributes, which in my case are generated at random (don’t ask me why) and did not work on repeated runs when pages were reloaded. The Web Developer toolbar is very helpful here: click the ‘Forms’ button and select ‘Display form details…’ and you see additional information right on the page under test, which helps with the XPath work. Here are some examples:
    • Push a submit button
      • Command: clickAndWait
      • Target: //input[@type='submit' and @name='send']
      • Value:
    • Make a selection on a dropdown list:
      • Command: select
      • Target: //select[@name='item9']
      • Value: value=1
    • Check a radio button:
      • Command: check
      • Target: //input[@type='radio' and @name='field3' and @value='1']
      • Value:
    • Click a link:
      • Command: clickAndWait
      • Target: //a[@title='Link zur Seite Erholungsstrategie']
      • Value:
    • Enter a value into an inputbox:
      • Command: type
      • Target: //input[@type='text' and @name='age']
      • Value: 32
    • After you opened and closed a popup, return to main window:
      • Command: selectWindow
      • Target: null
      • Value:

Organize all tests in a test-suite and execute them with Selenium Core:

  • Build a file like this one, with relative links to your test files: TestSuite.html
  • To execute a test suite, open Selenium Core in a browser and enter the path to your test suite in the upper left frame. A list of all the tests in the suite should appear. You are now ready to execute them.
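For reference, a Selenium Core test suite is a plain HTML file containing a table with one row per test, each linking to a test file relative to the suite. A minimal sketch with made-up file names:

```
<html>
<body>
<table>
  <tr><td><b>My Test Suite</b></td></tr>
  <tr><td><a href="./Tests/EnquiryFormTest.html">EnquiryFormTest</a></td></tr>
  <tr><td><a href="./Tests/PopupTest.html">PopupTest</a></td></tr>
</table>
</body>
</html>
```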

My personal tips:

  • Take a look at the tab ‘Reference’ and play around with different commands.
  • When rewriting XPath expressions, occasionally click the find button next to them. It will either give you an error or mark the found target with a green frame on the page for a second.
  • If you execute tests against a slow server, it sometimes happens that an XPath coordinate does not find its target (because the page has not yet completely loaded). Try executing your test step by step at a slower pace.
  • In my case of testing enquiry forms, I had to test several successive, session-based forms with error alerts on invalid entries, etc. I had to delete session cookies very often via the Web Developer toolbar (Cookies / Clear Session Cookies).

Further things to try:

  • Get Selenium Remote Control, install and run it on your server, and generate PHPUnit test code from your existing tests with Selenium IDE. As far as I know, this way you can even test compatibility across different browsers and automate these tests for your builds.