Archive for the ‘Computer’ Category

Since this book repeatedly comes up in conversations about the super smart people, it seems worth reading. It is really old and should obviously be out of copyright (thanks, Disney!), but it possibly isn't, and in any case I couldn't find a useful PDF.

I did, however, find the above. It seems to lack a download-all function, and it's too much of a hassle to download all 398 pictures manually. They also lack OCR. So I set out to write a Python script to download them.

First I had to find a function to actually download files. So far I had only read the source of pages and worked with that. This time I needed to save a file (a picture) repeatedly.

Googling gave me: urllib.urlretrieve

So far so good, right? It must be easy to just do a for loop now and get it over with.

Not entirely. It turns out that the pictures are only stored temporarily in the website cache when one visits the page associated with the picture. So I had to make the script load the page before getting the picture. This slows it down a bit, but it's not too much trouble.

Next problem: sometimes a picture wasn't downloaded correctly for some reason. The file size, however, was a useful proxy for detecting this. So I had to find a way to get the file size. Google gave me os.stat (import os). Problem solved.

After doing that as well, some pictures were still not being downloaded correctly. Weird. After debugging, it turned out that some of the pictures were not .gif but .jpg files, located in a slightly different place. So I had to modify the code to handle that as well.

Finally, it worked for all the pictures.
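The final approach can be sketched as follows, in Python 3 (the original script used Python 2's urllib; the URLs, paths, and size threshold below are made up, since the actual site layout isn't shown here):

```python
import os
import urllib.request

BASE = "http://example.org/book"  # hypothetical site
MIN_SIZE = 5000                   # below this, assume a broken download

def page_url(n):
    return "%s/page/%d" % (BASE, n)

def image_urls(n):
    # the scans were mostly .gif, but some were .jpg in a slightly
    # different place; try both locations in order
    return ["%s/images/%d.gif" % (BASE, n),
            "%s/scans/%d.jpg" % (BASE, n)]

def download_page_image(n, dest_dir="book"):
    os.makedirs(dest_dir, exist_ok=True)
    # visiting the page first puts the picture in the site's cache
    urllib.request.urlopen(page_url(n)).read()
    for url in image_urls(n):
        dest = os.path.join(dest_dir, os.path.basename(url))
        try:
            urllib.request.urlretrieve(url, dest)
        except OSError:
            continue
        # use the file size as a proxy for a successful download
        if os.stat(dest).st_size >= MIN_SIZE:
            return dest
        os.remove(dest)
    return None

# for n in range(1, 399):
#     download_page_image(n)
```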

I OCR’d it with ABBYY FineReader (best OCR on the market AFAIK).



The Python code is here:

So, I thought it would be a good idea to take a copy of all the interesting threads on various forums, just in case they shut down. Doing it manually is a waste of time, so I went coding and made a crawler. After spending a couple of hours, I now have a crawler that reads lines from a txt file and downloads pages into folders accordingly.

code is here: Forum

or here:

import urllib2
import os

def isodd(num):
    return num % 2 == 1

#open the file with data about what to rip:
#alternating lines of output folder name and thread url
file0 = open("Forum threads/threads.txt", 'r')

#assign data to var
data0 = file0.readlines()
file0.close()

#line counter
number = 0

for line in data0:

    #even lines hold the output folder name; set it and continue
    if not isodd(number):
        outputfolder = line[:-1]  #strip the linebreak
        number = number + 1
        continue

    #odd lines hold the thread url; strip the last chars (linebreak + one more)
    threadurl = line[:-2]
    number = number + 1
    print "starting to crawl thread " + threadurl

    #create folder
    if not os.path.isdir("Forum threads/" + outputfolder):
        os.makedirs("Forum threads/" + outputfolder)

    #vars used to detect when two consecutive pages are identical,
    #i.e. when we have run past the last page of the thread
    lastdata = None
    lastdata2 = None

    #loop over all the pages; range starts at 0, so +1 is needed
    for page in range(999):
        response = urllib2.urlopen(threadurl + str(page + 1))

        #assign the data to a var
        page_source = response.read()

        #shift the previous page along and load the new data
        lastdata2 = lastdata
        lastdata = page_source

        #check if the last two pages are identical
        if page > 0 and lastdata == lastdata2:
            print "data identical, stopping loop"
            break

        #alternative check: identical length
        if page > 0 and len(lastdata) == len(lastdata2):
            print "length identical, stopping loop"
            break

        #create a file for each page, and save the data in it
        output = open("Forum threads/" + outputfolder + "/" + str(page + 1) + ".html", 'w')
        output.write(page_source)
        output.close()

        print "wrote page " + str(page + 1) + " in " + outputfolder + "/"
        print "length of file is " + str(len(page_source))

It was mentioned by TechDirt in their reporting on an absurd copyright case (so, pretty normal).



The author makes the strange choice of using indentation to mark borders between paragraphs, even though indentation is very important in Python. He could just have used newlines for that.



The code is not easily copyable. If one tries, one gets spaces between seemingly every character, like this: >>>cho i c e = ’ham’. This seems to be due to the font used.



Sometimes the examples are not explained clearly enough. For instance, elif is explained as an “optional condition”, which is not all that clear. Fortunately, this is not much of a problem if one has an IDE ready to test it. For the record, elif works as an alternative condition that is checked if the first one isn't true. Ex.:

a = 1
b = 2
c = 3
if a == 1:
    print "a holds"
elif b == 2:
    print "b holds and a doesn't"
elif c == c:
    print "neither a nor b holds, but c does"

>>a holds


a = 0
b = 2
c = 3
if a == 1:
    print "a holds"
elif b == 2:
    print "b holds and a doesn't"
elif c == c:
    print "neither a nor b holds, but c does"

>>b holds and a doesn't


a = 0
b = 4
c = 3
if a == 1:
    print "a holds"
elif b == 2:
    print "b holds and a doesn't"
elif c == c:
    print "neither a nor b holds, but c does"

>>neither a nor b holds, but c does


Note how the order of the elifs matters. An elif only activates when all the previous ifs and elifs have failed.



Python apparently does not understand how to add numbers and strings. So things like:

a = "string"
b = 1
print a + b

gives an error. It seems to me that one should just have Python auto-convert numbers to strings (using the str function), just as Python converts integers to floats when adding two such objects together. Perhaps this has changed in Python 3 or later. I'm running 2.7.
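For comparison, here is how the error shows up and the usual explicit fix with str() (written in Python 3 syntax, where the behavior is the same):

```python
a = "string"
b = 1

try:
    result = a + b  # TypeError: can only concatenate str (not "int") to str
except TypeError:
    result = a + str(b)  # explicit conversion is required

print(result)  # string1
```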




I was going to write some criticism of this game. It is not that bad, it's a good game, but Blizzard made so many unforgivable (because it's Blizzard) mistakes that it seriously calls into question the usual principle that Blizzard is a sign of good quality. Luckily for me, someone has already done this in a series of excellent videos. Watch them if you care about game studies, the Diablo series, and Blizzard.

Link to article.

Ever been annoyed by the overly long links one gets when one tries to copy a link from Google? Here is an example:

These are very annoying, and their only function is to allow Google to track which link you actually clicked. Fortunately, there is a way to get rid of them (at least if one is using Firefox, which is my favorite browser).

So, after installing Greasemonkey and a suitable script (I tried one first that didn't work), here is what happens when I copy-paste the link again:
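What such a userscript essentially does is unwrap Google's redirect: the real destination sits in the url parameter of the google.com/url?... link. A minimal Python 3 sketch of the same idea (the wrapped link below is a made-up example, not a real one from Google):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_google_link(link):
    """Return the real destination hidden in a Google redirect link.

    If the link is not a Google /url redirect, return it unchanged.
    """
    parsed = urlparse(link)
    if parsed.path == "/url":
        params = parse_qs(parsed.query)  # also decodes the %xx escapes
        if "url" in params:
            return params["url"][0]
    return link

# hypothetical example of a wrapped link
wrapped = "https://www.google.com/url?sa=t&url=https%3A%2F%2Fexample.com%2Fpage&usg=abc"
print(unwrap_google_link(wrapped))  # https://example.com/page
```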

Two days ago, Blizzard posted some statistical data about the top 200 (really 199) players in Europe. I did a bit of statistical analysis on the data and made some nice illustrations, as seen below. The data is self-explanatory, but the explanation for it is unknown to me. It need not be the case that Terran is overpowered; it may be that players simply like playing it more than the other races.
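As a sketch of the kind of tallying involved, here is how one might count the race distribution among the top players in Python; the records below are invented placeholders, not Blizzard's actual numbers:

```python
from collections import Counter

# hypothetical (race, league points) records for the top players;
# the real data came from Blizzard's ladder page
top_players = [
    ("Terran", 410), ("Terran", 395), ("Protoss", 388),
    ("Zerg", 380), ("Terran", 377), ("Protoss", 360),
]

# how many of the top players play each race
race_counts = Counter(race for race, points in top_players)
print(race_counts.most_common())  # [('Terran', 3), ('Protoss', 2), ('Zerg', 1)]
```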

Automatic addition of trackers

With the probable closure of The Pirate Bay (TPB) in the near future, the torrent scene will become more decentralized, especially if the TPB tracker goes down. Luckily, there are new upcoming trackers to take its place. The problem is that they are not listed in many of the torrent files. Besides adding the trackers manually, there are some ways to fix this. One way is to re-upload all torrents with more trackers, which doesn't seem like a plausible method. Another way is to have an indexer site (like TPB, Mininova or BTjunkie) update all the torrents with more trackers. This doesn't seem like a bad idea.

Another idea, and this is my idea for the client, is to have an option in the client to automatically add more trackers to all loaded torrents. That would be very useful for torrents that do not include any new public tracker such as Openbittorrent (OBT)1 or Publicbittorrent (PBT)2. There could be an editable list of trackers to be automatically added, or there could be a function that shares all public trackers for all torrents, though I'm unsure what the consequences of that would be. Going even further, one could have peers share public trackers with each other; that way any new public tracker would quickly spread. There is reason to be cautious, however: a third party, say the RIAA, may set up such a public tracker just to sniff IPs.
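A sketch of the client-side idea: given a torrent's metadata as an already-decoded dict (the bencode decoding itself is omitted here), merge in a list of extra public trackers without duplicating the ones already present. The tracker URLs and dict layout follow the multitracker convention, where 'announce-list' is a list of tiers:

```python
# example public trackers to merge in; an editable list in the client
EXTRA_TRACKERS = [
    "udp://tracker.openbittorrent.com:80/announce",
    "udp://tracker.publicbt.com:80/announce",
]

def add_trackers(torrent, extra=EXTRA_TRACKERS):
    """Append extra trackers to a decoded torrent dict's announce-list.

    'announce-list' is a list of tiers, each a list of tracker URLs;
    each new tracker goes into its own tier at the end, and trackers
    that are already listed anywhere are skipped.
    """
    tiers = torrent.setdefault("announce-list", [])
    known = {url for tier in tiers for url in tier}
    for url in extra:
        if url not in known:
            tiers.append([url])
            known.add(url)
    return torrent

torrent = {"announce": "http://old.tracker/announce",
           "announce-list": [["http://old.tracker/announce"]]}
add_trackers(torrent)
```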

Automatic force-seeding of torrents with no seeds

The goal is to make it easier to keep rarer torrents alive. Currently I have to manually look through my torrent client for torrents that have no seeds and then force-seed them so that the peers can finish their download.3 It should probably force-seed for a set time. If it force-seeded only until there were at least two seeds, complications would arise. Imagine that two people are the only seeds and both their torrent clients have this feature. Both clients would force-seed the torrent until they spot another seed, but then they would see each other, and so they would keep bouncing on and off force-seeding while the peers would not get much data. A mix of these two ideas may be a good one.
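The mixed rule can be sketched in code. To avoid the bouncing problem described above, the decision combines both conditions: start force-seeding when no other seeds are visible, then keep going for at least a minimum time regardless of what appears. The function name and the time threshold are made up for illustration:

```python
MIN_FORCE_SEED_SECONDS = 3600  # made-up threshold: keep seeding at least an hour

def should_force_seed(other_seeds, forced_since, now):
    """Decide whether to (keep) force-seeding a torrent.

    other_seeds  -- seeds visible in the swarm besides ourselves
    forced_since -- timestamp when force-seeding started, or None
    now          -- current timestamp
    """
    if forced_since is None:
        # not force-seeding yet: start only if the torrent is otherwise dead
        return other_seeds == 0
    # already force-seeding: honour the minimum time before reconsidering,
    # so two clients running this rule do not bounce on and off together
    if now - forced_since < MIN_FORCE_SEED_SECONDS:
        return True
    return other_seeds == 0
```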


3We're presuming here, for simplicity, that if there is no seed, then there is no complete copy of the file in the swarm. This is false in some cases.