Duolingo: The Next Chapter in Human Computation

An amazing idea from the founder behind reCAPTCHA!

With Duolingo you learn a language for free, and simultaneously translate the Web.

If Duolingo gets very popular, the whole English Wikipedia could be translated in about 80 hours by approximately 1 million people. There are currently 1.2 billion people trying to learn another language - the potential is definitely there!

Code · Design · Interesting · Stuff Permanent link 9. Jun

It's a marathon and not a sprint

What matters is not what you do in the next 2 weeks, but what you do in the next 15 years. This is a marathon, not a sprint. We should focus on the long term, not the short term.

Overworking yourself for a few years and stressing over small things can easily lead to burnout. Focusing on the superficial and easy things can be rewarding, but only for a little while. Achieving greatness requires many years of devotion where you keep the big picture in focus.

Research backs this up: there is no easy way to master anything. Researchers have shown it takes about ten years to develop expertise in any of a wide variety of areas, including chess playing, music composition, telegraph operation, painting, piano playing, swimming, tennis, and research in neuropsychology and topology... Read more about this in Teach Yourself Programming in Ten Years by Peter Norvig.

If you are willing to spend 15 years on something, your idea pool opens up and you can tackle much harder and more interesting problems. If you work 8 hours per weekday for 15 years, that's 31,200 hours. Please spend it on something interesting and worthwhile!

Life · Psychology · Tips Permanent link 6. Jun

Finding Your Way as an Entrepreneur

Great talk by Dropbox co-founder Drew Houston [perma link at Stanford]:

Education · Interesting · Stuff · Tips Permanent link 5. Jun

Programming is an art

Pony magic

We programmers have a bad reputation. The general thinking is that we are asocial robots that create code for computers to understand. I think we are artists and magicians that create something out of nothing. Programming is a creative process, it's an art, and it's important to nourish our creativity!

I have been thinking about why I love programming, and the bottom line is that I love creating stuff. I like to imagine how things can be. Using only a computer I can create anything I can imagine, and I can easily share my creations with other people. Not many professions enable this.

One of the worst things that can happen to a programmer is to become a code monkey. Only implementing things that other people imagine. Losing creativity. Following orders. Don't let this happen to you! Here is a tip on how to nourish your creativity, at least a little every week. It's very simple:

Allocate 4 hours per week where you can create anything you want. Focus on creating. Don't let anybody interrupt you. Move to a cafe. Lock your office. Don't consume. Don't read Hacker News. Don't consume Twitter. Use the 4 hours to create something!

Want to get a bit inspired? Watch this video of John Cleese about creativity - it's great!

To be creative, you have to create a kind of oasis in your life.
Boundaries of space and boundaries of time.
– John Cleese

Code · Life · Psychology · Tips Permanent link 30. May

The ultimate vimrc on GitHub

I have released a new version of my Vim configuration!

Read the documentation and get it from here:

The ultimate vimrc screenshot

Awesome coding music

You need some awesome electronic music when you code. Here are some of my favorites:

If you want to recommend me some artists/tracks write an email to amix@amix.dk ;-)

Life · Stuff · Tips Permanent link 25. May

WSGI Production Setup: uWSGI, supervisor and nginx


In this post I present a uWSGI, supervisor and nginx setup, which is currently probably the best way to run WSGI applications (including Django).

For Todoist and Wedoist we ran CherryPy's WSGI server for a long time. CherryPy has served us well, but after upgrading to a recent version we ran into deadlock issues. This forced us to look for alternatives, since we could not figure out what the problem was (and it's unsustainable to downgrade to a very old version).

Why we picked uWSGI

  • It's implemented in C, and many benchmarks indicate it's one of the fastest WSGI servers (benchmark 1, benchmark 2)
  • nginx ships with uWSGI support since version 0.8.40
  • nginx and uWSGI integrate well since they speak an optimized binary protocol instead of HTTP
  • It supports LOTS of features; one of the best is killing and restarting dead processes
  • It supports graceful restart of servers (without losing any requests!). This is great since we do multiple deployments per day
  • It seems to be well supported and under active development
  • A lot of others are choosing it as their production setup

How to run it behind supervisor

supervisor lets you easily control and monitor other processes (and restart them if they crash).

Our /etc/supervisor.d/uwsgi looks something like this:

command = /usr/local/bin/uwsgi -s 
          --file /home/ubuntu/todoist/uwsgi_todoist.py --callable app 
          --processes 2 -t 60 --disable-logging -M --need-app -b 32768

The interesting parts of the uWSGI configuration are the following:

  • -t 60: If a process is not responsive for 60 seconds it is killed and restarted
  • --disable-logging: We disable logging, since we use a central log
  • -M: indicates master mode
  • --need-app: If the app crashes on start then uWSGI crashes as well
  • -b 32768: Sets a bigger buffer size. We needed this because we ran into an "invalid request block size" error
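For reference, the --file/--callable pair points uWSGI at a Python module exposing a WSGI callable named app. A minimal sketch of such a module (an illustration, not our actual application code):

```python
# Minimal WSGI callable of the kind uWSGI's --file/--callable flags expect.
# This is an illustrative sketch, not the actual Todoist application.
def app(environ, start_response):
    body = b'Hello from uWSGI'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

uWSGI imports the file, looks up the name given to --callable and calls it for every request, exactly like any other WSGI server would.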

nginx setup

Our nginx setup looks something like this. Do note that we are using nginx's upstream to distribute the load (for CherryPy we used haproxy):

http {

    upstream todoist_uwsgi {
        server localhost:14001;
        server localhost:14002;
    }

    server {

        location / {
            error_page 502 /error_502.html;

            include uwsgi_params;

            uwsgi_param X-Real-IP $remote_addr;
            uwsgi_param Host $http_host;

            uwsgi_pass todoist_uwsgi;
        }
    }
}

Special case for slow requests

In Wedoist it's possible to upload large files, and a large upload can easily take longer than the normal request timeout.

To handle this you must treat upload requests specially and set a much larger timeout.

For Wedoist we are running servers that only handle uploads:

command = /usr/bin/uwsgi -s 
          --file /home/ec2-user/wedoist/uwsgi_wedoist.py --callable app 
          --processes 4 -t 6000 --disable-logging -M --need-app -b 32768

As you can see, -t is set to 6000 seconds, which means a process/request is only considered dead after not responding for 6000 seconds.

In the nginx configuration we also set uwsgi_read_timeout and uwsgi_send_timeout to 6000, and we redirect upload requests to a special upstream:

location /Uploader/attachFile {
    error_page 502 /error_502.html;

    uwsgi_read_timeout 6000;
    uwsgi_send_timeout 6000;

    include uwsgi_params;

    uwsgi_param X-Forwarded-Proto https;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_pass upload_cores;
}

That's about it. I hope somebody finds this useful :-)

Happy hacking!

Code · Code improvement · Python · Tips · Todoist · Wedoist Permanent link 17. May

Displaying timezones better in Python

I have released timezones for Python. It makes timezones more user-friendly by formatting them better and auto-guessing the user's timezone based on their IP address.

The library provides:

  • User-friendly rendering of common timezones. pytz.common_timezones includes 430 common timezones, without any smart sorting or display of useful information such as UTC offsets. Presenting this raw list to users makes for an awful experience - like it's done in Django
  • Auto-guessing a user's timezone based on the user's IP. This is done via pygeoip
  • Support for fixed-offset timezones, such as "GMT +1:00"
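The fixed-offset rendering in the last bullet is straightforward to do by hand; a minimal sketch (the function name is mine, not the library's API):

```python
def format_fixed_offset(minutes):
    # Render a fixed UTC offset given in minutes as a label like "GMT +1:00".
    sign = '+' if minutes >= 0 else '-'
    hours, mins = divmod(abs(minutes), 60)
    return 'GMT %s%d:%02d' % (sign, hours, mins)

format_fixed_offset(60)    # "GMT +1:00"
format_fixed_offset(-330)  # "GMT -5:30"
```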

Example usage of the library:

from timezones.tz_rendering import html_render_timezones

html_timezones = html_render_timezones(select_name='timezone',
                                       first_entry=_('Select your timezone'))

A screenshot of the library in use [in Wedoist]:

python-timezones screenshot

Updating caching headers for Amazon S3 and CloudFront

I made a major blunder when setting caching headers for Amazon S3 and CloudFront. This blunder made my sites slower and cost more in bandwidth. In this little blog post I will detail how to fix it and make sure you use correct caching headers.

Use the correct syntax

The first rule: make sure the syntax is correct. Correct syntax looks like this:

  • Cache-Control: max-age=155520000, public
  • Expires: Sat, 29 Apr 2017 13:31:45 GMT

I had used the following syntax (it's wrong and won't be understood by browsers!):

  • Cache-Control: max-age 155520000

Read more in RFC 2616 for all the details surrounding these headers.
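If you generate these headers from Python, the standard library can produce a correctly formatted HTTP date for you; a sketch (the helper name is mine):

```python
import time
from email.utils import formatdate

def far_future_headers(max_age_seconds):
    # Build a correct Cache-Control/Expires pair for far-future caching.
    return {
        'Cache-Control': 'max-age=%d, public' % max_age_seconds,
        # usegmt=True yields an RFC 1123 date like "Sat, 29 Apr 2017 13:31:45 GMT"
        'Expires': formatdate(time.time() + max_age_seconds, usegmt=True),
    }

far_future_headers(155520000)['Cache-Control']  # "max-age=155520000, public"
```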

Be greedy and use file versioning

Use file versioning (for example, make an md5 hash part of the file name). You are forced to do this anyway, since CloudFront does not support invalidations that well.

Already using file versioning? Great, then set your Expires header many years in the future, since the filename will change when the file changes (i.e. you don't have to worry about invalidating old files).
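A hash-based versioned filename is easy to generate at deploy time; a sketch (the helper name is mine):

```python
import hashlib

def versioned_name(filename, content):
    # Embed a short content hash in the name, e.g. app.css -> app.<hash>.css,
    # so the URL changes whenever the content changes.
    digest = hashlib.md5(content).hexdigest()[:8]
    base, dot, ext = filename.rpartition('.')
    if not dot:
        return '%s.%s' % (filename, digest)
    return '%s.%s.%s' % (base, digest, ext)
```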

Made a blunder? Use my script to update all S3 files in a bucket

Before you update headers on every S3 object, make sure the code works by testing it on dummy objects. I had a lot of issues getting it to work, since it replaces the old metadata rather than just updating it. You can use my script (but it's not bulletproof, so be sure any missing headers you use are copied over to the updated metadata).

You will need to do the following:

  • Using the script below, test it out on dummy S3 objects
  • Update headers for every S3 object
  • Create new Amazon CloudFront distributions after the S3 objects are updated. This can be done via aws.amazon.com
  • Update DNS records to use the new distributions

#!/usr/bin/env python

"""
    Updates S3 objects with new cache-control headers.

        python fix_cloudfront.py <bucket_name> <keys>*

        Updates all keys of avatars.wedoist.com bucket::
            python fix_cloudfront.py avatars.wedoist.com

        Updates only one key::
            python fix_cloudfront.py avatars.w.com d39c2.gif

    Read more here::

    :copyright: by Amir Salihefendic ( http://amix.dk/ )
    :license: MIT
"""
import sys
import mimetypes
import email
import time
import types
from datetime import datetime, timedelta

from boto.s3.connection import S3Connection
from boto.cloudfront import CloudFrontConnection

#--- AWS credentials ----------------------------------------------
AWS_KEY = '...'
AWS_SECRET = '...'

#--- Main function ----------------------------------------------
def main(s3_bucket_name, keys=None):
    s3_conn = S3Connection(AWS_KEY, AWS_SECRET)

    bucket = s3_conn.get_bucket(s3_bucket_name)

    if not keys:
        keys = bucket.list()

    for key in keys:
        if type(key) == types.StringType:
            key_name = key
            key = bucket.get_key(key)
            if not key:
                print 'Key not found %s' % key_name
                continue

        # Force a fetch to get metadata
        # see this why: http://goo.gl/nLWt9
        key = bucket.get_key(key.name)

        aggressive_headers = _get_aggressive_cache_headers(key)
        key.copy(s3_bucket_name, key, metadata=aggressive_headers, preserve_acl=True)

        print 'Updated headers for %s' % key.name

#--- Helpers ----------------------------------------------
def _get_aggressive_cache_headers(key):
    metadata = key.metadata

    metadata['Content-Type'] = key.content_type

    # HTTP/1.0 (5 years)
    metadata['Expires'] = '%s GMT' %\
            email.Utils.formatdate(
                time.mktime((datetime.now() +
                             timedelta(days=365 * 5)).timetuple()))

    # HTTP/1.1 (5 years)
    metadata['Cache-Control'] = 'max-age=%d, public' % (3600 * 24 * 360 * 5)

    return metadata

if __name__ == '__main__':
    main( sys.argv[1],
          sys.argv[2:] )
Code · Python · Stuff Permanent link 2. May

Product features vs. UI elements on the screen

As noted in The essence of minimal product design, successful products hide complexity from the users. Balsamiq has a great graph illustrating this as well.

features vs. buttons

from The Balsamiq Mockups Manifesto

Design · Stuff · Todoist · Wedoist Permanent link 28. Apr

Hiring talented iOS and Android developers

We are expanding the team at Todoist and Wedoist with iOS and Android programmers.

Some of our stats:

  • Over 300,000 users
  • Rapid growth
  • Our business is profitable
  • We started working full-time on the company just 8 months ago. Imagine the future :-)

Join us either freelance or full-time and work on something that makes the world more productive.

Send your resume to amix@amix.dk, be sure to include some code you are proud of (or a link to your GitHub/BitBucket profile).

Announcements · Code · Todoist · Wedoist Permanent link 15. Mar

Open sourced coffee-watcher, less-watcher and watcher_lib

I have updated/published the following libraries today:
  • coffee-watcher: a script that can watch a directory and recompile your .coffee scripts if they change
  • less-watcher: a script that can watch a directory and recompile your .less scripts if they change
  • watcher_lib: A library that can watch a directory and recompile files if they change. Can be used to build watcher scripts

Basically, these scripts are useful in development since you don't need to think about recompiling your files. You can also use watcher_lib to implement custom watchers.
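The core trick - track each file's modification time and recompile when it changes - can be sketched in a few lines of Python (an illustration, not the actual watcher_lib API):

```python
import os

def compile_if_needed(mtimes, path, compile_fn):
    # Recompile `path` only if its modification time changed since last seen.
    # `mtimes` is a plain dict keeping the last-seen mtime per path.
    mtime = os.path.getmtime(path)
    if mtimes.get(path) != mtime:
        mtimes[path] = mtime
        compile_fn(path)
        return True
    return False
```

Run this for every watched file on each poll tick and each file is recompiled at most once per change.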



sudo npm install coffee-watcher
coffee-watcher -p [prefix] -d [directory]

  -d  Specify which directory to scan.
  -p  Which prefix should the compiled files have?
      By default, style.coffee will be compiled to .coffee.style.css
  -h  Prints help



sudo npm install less-watcher
less-watcher -p [prefix] -d [directory]

  -d  Specify which directory to scan.
  -p  Which prefix should the compiled files have?
      By default, style.less will be compiled to .less.style.css
  -h  Prints help



sudo npm install watcher_lib

How to build a generic watcher (here is less-watcher's implementation):

# Use `watcher-lib`, a library that abstracts away most of the implementation details.
watcher_lib = require 'watcher_lib'

# Searches through a directory structure for *.less files using `find`.
# For each .less file it runs `compileIfNeeded` to compile the file if it's modified.
findLessFiles = (dir) ->
    watcher_lib.findFiles('*.less', dir, compileIfNeeded)

# Keeps a track of modified times for .less files in a in-memory object,
# if a .less file is modified it recompiles it using compileLessScript.
# When starting the script all files will be recompiled.
WATCHED_FILES = {}
compileIfNeeded = (file) ->
    watcher_lib.compileIfNeeded(WATCHED_FILES, file, compileLessScript)

# Compiles a file using `lessc`. Compilation errors are printed out to stdout.
compileLessScript = (file) ->
    fnGetOutputFile = (file) -> file.replace(/([^\/\\]+)\.less/, "#{argv.p}$1.css")
    watcher_lib.compileFile("lessc #{ file }", file, fnGetOutputFile)

# Starts a poller that polls each second in a directory that's
# either by default the current working directory 
# or a directory that's passed through process arguments.
watcher_lib.startDirectoryPoll(argv.d, findLessFiles)

css_image_concat: Improve performance by concatenating your images

I have updated my CSS image concat script from 2007. This script concatenates images into one image and creates a CSS file with classes. This is a super useful optimization when you want to avoid issuing a lot of HTTP requests for many small images (like icons).

What is the idea behind this?

The idea is to take all separated images and concat them to one image file:

Concat images

This means that only one HTTP request is made to fetch all the images.

CSS is then used to display an image (by using background offsets):

.cmp_email_icon {
    background: transparent url(all_images.gif) 0 -48px no-repeat;
    width: 21px;
    height: 16px;
}
Installing it

sudo pip install css_image_concat

This script also requires ImageMagick.

Using it

$ css_image_concat static/icons static/all_icons.png static/all_icons.css
Parsed 18 in static/icons
Written CSS file: static/all_icons.css
Written image file: static/all_icons.png

GitHub and PyPI

You can check out the code here:

Announcements · Code improvement · Python · Tips Permanent link 20. Feb

Focusing is about saying no - Steve Jobs (WWDC'97)

When Steve Jobs returned to Apple in 1997, he fired thousands of people and discontinued lots of projects. At WWDC '97 he explained why:

Apple suffered for several years from lousy engineering management. There were people that were going off in 18 different directions... What happened was that you looked at the farm that's been created with all these different animals going in all different directions, and it doesn't add up - the total is less than the sum of the parts. We had to decide: What are the fundamental directions we are going in? What makes sense and what doesn't? And there were a bunch of things that didn't.

Focusing is saying yes, right? No. Focusing is about saying no. You've got to say, no, no, no. The result of that focus is going to be some really great products where the total is much greater than the sum of the parts.

Design · Interesting · Stuff Permanent link 20. Feb
© Amir Salihefendic. Powered by Skeletonz.