Is the sum of cubes equal to the squared sum of the counting integers?

R can tell us:

DF.numbers = data.frame(cubesum=numeric(),sumsquare=numeric()) #initial dataframe
for (n in 1:100){ #loop and fill in
  DF.numbers[n,"cubesum"] = sum((1:n)^3)
  DF.numbers[n,"sumsquare"] = sum(1:n)^2
}

library(car) #for the scatterplot() function
scatterplot(cubesum ~ sumsquare, DF.numbers,
            smoother=FALSE, #no moving average
            labels = rownames(DF.numbers), id.n = nrow(DF.numbers), #labels
            log = "xy", #logscales
            main = "Cubesum is identical to sumsquare (shown here for n = 1:100)")

#checks that they are identical, except for the name
all.equal(DF.numbers["cubesum"],DF.numbers["sumsquare"], check.names=FALSE)



One can increase the number in the loop to test more cases. I tested it with 1:10000, and the identity still held.
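The identity also follows from the closed form 1 + 2 + … + n = n(n+1)/2, so it can be checked with exact integer arithmetic rather than a plot. A quick Python sketch (mine, not from the post above):

```python
def cube_sum(n):
    # 1^3 + 2^3 + ... + n^3
    return sum(k ** 3 for k in range(1, n + 1))

def squared_sum(n):
    # (1 + 2 + ... + n)^2, via the closed form n(n+1)/2
    return (n * (n + 1) // 2) ** 2

# check the identity exactly for the first 1000 integers
assert all(cube_sum(n) == squared_sum(n) for n in range(1, 1001))
print("identity holds for n = 1..1000")
```

Because the check uses integers throughout, there is no floating-point tolerance to worry about, unlike the all.equal() comparison.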

Comments on Learning Statistics with R

So I found a textbook for learning both elementary statistics (much of which I knew but hadn't read a textbook about) and R. The book is legally free.

Numbers refer to page numbers in the book. The book is an early version (“0.4”), so many of these are small errors I stumbled upon while going through virtually all the commands in the book in my own R session.



The modeOf() and maxFreq() calls do not work. This is because afl.finalists is a factor, and they expect a vector. One can use as.vector() to make them work.



Worth noting that summary() gives the same output as quantile(), except that it also includes the mean.



Actually, the output of describe() does not tell us the number of NAs. It is only because the author assumes there are 100 total cases that he can compute 100 − n and get the number of NAs for each variable.



The cakes.Rdata is already transposed.



as.logical also converts numeric 0 and 1 to FALSE and TRUE. However, oddly, it does not understand the character strings “0” and “1”.



Actually, P(0) is not equivalent to impossible. See:



Actually, 100 simulations with N = 20 will generally not produce a histogram like the one above. Perhaps it is better to change the command to K = 1000. And why not add hist() so it can be compared visually to the theoretical one?


hist(rbinom( n = 1000, size = 20, prob = 1/6 ))


It would be nice if the code for making these simulations was shown.
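In the meantime, here is a rough sketch of such a simulation in Python (not the book's code; the book works in R), comparing simulated frequencies against the theoretical binomial probabilities:

```python
import random
from collections import Counter
from math import comb

# 1000 simulated draws from Binomial(size=20, p=1/6)
n_sims, size, p = 1000, 20, 1 / 6
random.seed(1)  # for reproducibility

# each draw counts successes in 20 Bernoulli(1/6) trials
draws = [sum(random.random() < p for _ in range(size)) for _ in range(n_sims)]
freqs = Counter(draws)

# compare the simulated relative frequencies to the theoretical pmf
for k in range(8):
    theory = comb(size, k) * p ** k * (1 - p) ** (size - k)
    simulated = freqs[k] / n_sims
    print("k=%d: simulated %.3f, theoretical %.3f" % (k, simulated, theory))
```

With 1000 simulations the two columns should roughly agree; with only 100 they often will not, which is the point above.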



“This is just bizarre: σ̂² is and unbiased estimate of the population variance”





Typo in Figure 11.6 text. “Notice that when θ actually is equal to .05 (plotted as a black dot)”




“That is, what values of X2 would lead is to reject the null hypothesis.”



It is most annoying that the author doesn't provide the code for reproducing his plots. I spent 15 minutes trying to find a function to create histograms by group.





“It works for t-tests, but it wouldn’t be meaningful for chi-square testsm F -tests or indeed for most of the tests I talk about in this book.”



“we see that it is 95% certain that the true (population-wide) average improvement would lie between 0.95% and 1.86%.”


This wording is dangerous because the percent sign has two interpretations. In the relative sense, the claim is wrong; the author means absolute percentages.



The code has +'s in it, which means it cannot simply be copied and run. This usually isn't the case, but it happens a few times in the book.



In the description of the test, we are told to tick when the values are larger than the comparison value. However, in the one-sample version, the author ticks when the value is equal to it. I guess this means we tick when it is equal to or larger than.
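If I have read the rule correctly, a "tick" is just a count of cases at or above the comparison value. A tiny Python sketch with made-up numbers:

```python
# made-up example values; the rule as I read it is to tick a case
# when its value is greater than or equal to the comparison value
values = [12, 15, 9, 20, 15, 7]
comparison = 15

ticks = sum(1 for v in values if v >= comparison)
print(ticks)  # → 3 (the two 15s and the 20)
```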



This command doesn't work as given, because the data frame isn't attached as the author assumes; one must attach it (or extract the variables explicitly) first.

> mood.gain <- list( placebo, joyzepam, anxifree)



First the author says he wants to use the non-adjusted R^2, but then in the text he uses the adjusted value.



Typo with “Unless” capitalized.



“(3.45 for drug and 0.92 for therapy),”

He must mean .47 for therapy; .92 is the number for the residuals.



In the alternative hypothesis, the author uses “u_ij” instead of the “u_rc” used in the null hypothesis. I'm guessing the null hypothesis is right.



As earlier, it is ambiguous when the author talks about increases in percent. It could be relative or absolute. Again, in this case it is absolute. The author should write “percentage points” or similar to avoid confusion.





“I find it amusing to note that the default in R is Type I and the default in SPSS is Type III (with Helmert contrasts). Neither of these appeals to me all that much. Relatedly, I find it depressing that almost nobody in the psychological literature ever bothers to report which Type of tests they ran, much less the order of variables (for Type I) or the contrasts used (for Type III). Often they don’t report what software they used either. The only way I can ever make any sense of what people typically report is to try to guess from auxiliary cues which software they were using, and to assume that they never changed the default settings. Please don’t do this… now that you know about these issues, make sure you indicate what software you used, and if you’re reporting ANOVA results for unbalanced data, then specify what Type of tests you ran, specify order information if you’ve done Type I tests and specify contrasts if you’ve done Type III tests. Or, even better, do hypotheses tests that correspond to things you really care about, and then report those!”


An example of the necessity of open methods along with open data. Science must be reproducible. The best option is simply to share the exact source code for the analyses in a paper.

So I tried Linux again

Usually, every few years I try Linux just to see how it has improved since the last time. So far I have not migrated permanently to Linux on my desktop. Simply put, Windows (7) is better for my purposes.

Whenever I try Linux, I pick the most popular distro. This time it was Mint. Overview here. The reason to pick the most mainstream one is that it is the one likely to have the best driver support, the fewest problems, the most features, the easiest support for programs, and so on. Basically, I'm picking the best Linux distro to compare with Windows.

The first problem after installing was that I could not make full use of my dual-screen setup. In Windows I use the program UltraMon so that I can have a taskbar on the second screen as well. Very useful when one has lots of programs open. After googling, this feature is apparently not available in the default Cinnamon desktop. It's been an open issue for 2 years.

So the solution was to install some other desktop environment. A few people mentioned that this could be done in KDE. So I tried installing KDE through the standard Software Manager. However, it only worked about halfway. Asking my Linux-expert roommate, he told me that SM is dumb and doesn't install necessary dependencies. Why would anyone make the default program so stupid? Anyway, I then did it with Synaptic (another Software Manager-ish program, also built in). I logged over to KDE, and it was possible to get a working taskbar on the second screen, although not intuitively and in a rather complicated way (so complicated one needs a guide even if one is considered a computer expert). Hurray!

The next annoyance was changing the date format and the like; especially getting KDE to display a 24-hour system clock was difficult. But again, with guides I managed it.

Then there was the very annoying thing that KDE opens items with 1 click instead of 2. This was easily solvable, though.

A larger pain is that Linux still does not have a proper Winamp alternative. None of the alternatives I have tried (>10) have a specific library-indexing feature that Winamp has. If one has a huge library full of compilations, one will automatically have thousands of artists, most of them with only 1 or 2 tracks. All the other programs offer only alphabetic sorting of artist names. This is useless. What is needed is sorting by the number of tracks per artist, which Winamp has. One could run Winamp through Wine, but it is silly that this feature is still missing after so many years.

There is of course also the usual issue with gaming. Few games work well on Linux. DOTA 2 runs at an unplayable 15–30 fps on Linux; with the same settings it runs at 60 on Windows. Not strictly Linux's fault, but due to Microsoft's monopoly with DirectX, it is still a problem.

Another issue was that there were no useful hotkeys by default in KDE. No hotkey for minimizing all windows. No hotkey for opening the application launcher (Start-menu equivalent). Worse, one could not set the WIN key for this purpose in KDE, since it's apparently treated purely as a modifier (dead) key. In Windows and Cinnamon, the WIN key is treated specially in that it can be both a modifier and a key in itself. Fortunately, there was a hack to fix this problem.

What Linux needs

For Linux to become decent for mainstream use, there are some obvious requirements. First, it must never be necessary for normal users to use the terminal or any other non-GUI app to do anything. Everything must be doable via GUI. Linux is clearly not ready.

Some good things

Some good things I noticed: booting is much faster, and the system is lighter, which is especially important for my shitty laptop (which still runs Linux and will continue to do so). Important work programs like R and LaTeX work mostly fine. In general, Cinnamon is good. They really have to fix that obvious problem with using dual monitors effectively.

Ripping books from UMDL Text: Leta S. Hollingworth’s Gifted children, their nature and nurture

Due to this book repeatedly coming up in conversation regarding the super smart people, it seems to be worth reading. It is really old and should obviously be out of copyright (thanks, Disney!); however, it possibly isn't, and in any case I couldn't find a useful PDF.

I did, however, find the above. Now, it seems to lack a download-all function, and it's too much of a hassle to download all 398 pictures manually. They also lack OCR. So I set out to write a Python script to download them.

First, I had to find a function to actually download files. So far I had only read the page source of pages and worked with that. This time I needed to save a file (a picture) repeatedly.

Googling gave me: urllib.urlretrieve

So far so good, right? Must be easy to just write a for loop now and get it over with.

Not entirely. It turns out that the pictures are only stored temporarily in the website cache when one visits the page associated with the picture. So I had to make the script load the page before fetching the picture. This slows it down a bit, but it's not too much trouble.

Next problem: sometimes a picture wasn't downloaded correctly, for some reason. The file size, however, was a useful proxy for detecting this. So I had to find a way to get the file size. Google gave me os.stat (import os). Problem solved.

After doing that as well, some pictures were still not being downloaded correctly. Weird. After debugging, it turned out that some of the pictures were not .gif but .jpg files, located in a slightly different place. So I had to modify the code to handle that as well.

Finally, it worked for all the pictures.
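For reference, the approach can be sketched in modern Python 3 (the original script was Python 2, where urlretrieve lived in urllib rather than urllib.request). The URL patterns and the size threshold below are made-up placeholders, not the site's actual layout:

```python
import os
import urllib.request

# hypothetical URL patterns, standing in for the site's real ones
PAGE_URL = "http://example.org/book?seq={n}"
IMAGE_URL = "http://example.org/cache/{n}.{ext}"
MIN_SIZE = 10_000  # bytes; smaller files are assumed to be failed downloads

def looks_complete(path, min_size=MIN_SIZE):
    # file size as a proxy for a successful download
    return os.stat(path).st_size >= min_size

def download_page_image(n):
    # visit the page first so the image lands in the site's cache
    urllib.request.urlopen(PAGE_URL.format(n=n)).read()
    # most pages are .gif, but a few turned out to be .jpg
    for ext in ("gif", "jpg"):
        fname = "{}.{}".format(n, ext)
        try:
            urllib.request.urlretrieve(IMAGE_URL.format(n=n, ext=ext), fname)
        except OSError:
            continue  # missing at this location; try the other extension
        if looks_complete(fname):
            return fname
        os.remove(fname)  # likely an error page; try the other extension
    return None
```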

I OCR'd it with ABBYY FineReader (the best OCR on the market, AFAIK).



The Python code is here:

Ripping threads from able2know forums

So, I thought it would be a good idea to take a copy of all the interesting threads on various forums, just in case they shut down. Doing it manually is a waste of time, so I went coding and made a crawler. After spending a couple of hours, I now have a crawler that reads lines from a txt file and downloads pages into folders based on that.

The code is here: Forum

or here:

import urllib2
import os

def isodd(num):
    return num % 2 == 1

#open data about what to rip
file0 = open("Forum threads/threads.txt", 'r')

#assign data to var
data0 = file0.readlines()
file0.close()

#line counter: even lines hold folder names, odd lines hold thread urls
number = 0

for line in data0:

    #if the line number is even, set the output folder and go to the next line
    if not isodd(number):
        outputfolder = line[:-1] #remove the trailing linebreak
        number = number + 1
        continue

    #get the thread url and remove the last chars (linebreak + trailing character)
    threadurl = line[:-2]
    number = number + 1
    print "starting to crawl thread " + threadurl

    #create the folder
    if not os.path.isdir("Forum threads/" + outputfolder):
        os.makedirs("Forum threads/" + outputfolder)

    #vars used to detect when the forum starts serving the same page again
    lastdata = None
    lastdata2 = None

    #loop over all the pages
    for page in range(999):
        #range starts at 0, so +1 is needed
        response = urllib2.urlopen(threadurl + str(page + 1))

        #assign the data to a var
        page_source = response.read()

        #shift the previous page into var2, load the new page into var1
        lastdata2 = lastdata
        lastdata = page_source

        #check if the last two pages are identical; past the last real page
        #the forum keeps serving the same page, so stop there
        if page > 0 and lastdata == lastdata2:
            print "data identical, stopping loop"
            break

        #alternative check: identical length (reported only, since two
        #different pages can coincidentally have the same length)
        if page > 0 and len(lastdata) == len(lastdata2):
            print "length identical"

        #create a file for each page, and save the data in it
        output = open("Forum threads/" + outputfolder + "/" + str(page + 1) + ".html", 'w')
        output.write(page_source)
        output.close()

        print "wrote page " + str(page + 1) + " in " + outputfolder + "/"
        print "length of file is " + str(len(page_source))

Review of python book and some other thoughts

It was mentioned by TechDirt in their reporting on an absurd copyright case (so, pretty normal).



The author makes the strange choice of using indentation to mark borders between paragraphs, even though indentation is very important in Python. He could just have used newlines for that.



The code is not easily copyable. If one tries, one gets spaces between every character or so, like this: >>>cho i c e = ’ham’. This seems to be due to the font used.



Sometimes the examples are not clearly enough explained. For instance, elif is explained as an “optional condition”, which is not all that clear. Fortunately, this is not much of a problem if one has an IDE ready to test it. For the record, elif works as an alternative condition that is tested when the first one isn't true. Example:

a = 1
b = 2
c = 3

if a == 1:
    print "a holds"
elif b == 2:
    print "b holds and a doesnt"
elif c == c:
    print "neither a or b holds, but c does"

>>a holds


a = 0
b = 2
c = 3

if a == 1:
    print "a holds"
elif b == 2:
    print "b holds and a doesnt"
elif c == c:
    print "neither a or b holds, but c does"

>>b holds and a doesnt


a = 0
b = 4
c = 3

if a == 1:
    print "a holds"
elif b == 2:
    print "b holds and a doesnt"
elif c == c:
    print "neither a or b holds, but c does"

>>neither a or b holds, but c does


Note how the order of the elifs matters. An elif only activates when all the previous ifs and elifs have failed.



Python apparently does not allow adding numbers and strings. So code like:

a = "string"
b = 1

print a + b

gives an error (a TypeError). It seems to me that Python could just autoconvert numbers to strings (using the str function), just as it converts integers to floats when adding two such objects together. Perhaps this has changed in Python 3 (it hasn't; Python 3 raises the same error). I'm running 2.7.
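In Python 3 syntax (print as a function), the explicit fix is to convert with str(), or with int() going the other way:

```python
a = "string"
b = 1

# "string" + 1 raises a TypeError; convert explicitly instead
joined = a + str(b)
print(joined)  # → string1

# the reverse direction also needs an explicit conversion
total = int("41") + b
print(total)  # → 42
```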




Thoughts about Diablo 3

I was going to write some criticism of this game. It is not that bad; it's a good game, but Blizzard made so many unforgivable (because it's Blizzard) mistakes that it seriously calls into question the usual principle that the Blizzard name is a sign of good quality. Luckily for me, someone has already done this in a series of excellent videos. Watch them if you care about game studies, the Diablo series, and Blizzard.

Re: “How To Get Rid Of Unwanted Redirects In Google Search Results”

Link to article.

Ever been annoyed by the overly long links one gets when one tries to copy a link from Google? Here is an example:

These are very annoying, and their only function is to let Google track which link you actually clicked. Fortunately, there is a way to get rid of them (at least in Firefox, which is my favorite browser).

So, after installing Greasemonkey and a suitable script (the first one I tried didn't work), here is what happens when I copy the link again:

Analysis of top 200 (199) data from the Starcraft 2 European ladder

Two days ago, Blizzard posted some statistical data about the top 200 (really 199) players in Europe. I did a bit of statistical analysis on the data and made some nice illustrations, shown below. The data is self-explanatory, but the explanation is unknown to me. It need not be the case that Terran is overpowered; it may be that players simply like playing it more than the other races.

Two ideas for torrent clients

Automatic addition of trackers

With the probable closure of The Pirate Bay (TPB) in the near future, the torrent scene will become more decentralized, especially if the TPB tracker goes down. Luckily, there are new trackers coming up to take its place. The problem is that they are not listed in many of the torrent files. Besides adding the trackers manually, there are some ways to fix this. One way is to re-upload all torrents with more trackers; that doesn't seem like a plausible method. Another is to have an indexer site (like TPB, Mininova, or BTjunkie) update all the torrents with more trackers. This doesn't seem like a bad idea.

Another idea, and this is my idea for the client, is to have an option in the client to automatically add more trackers to every torrent loaded. That would be very useful for torrents that do not include any of the new public trackers such as OpenBitTorrent (OBT)1 or PublicBitTorrent (PBT)2. There could be an editable list of trackers to be automatically added, or there could be a function that shares all public trackers across all torrents, though I'm unsure what the consequences of that would be. Going even further, peers could share public trackers with each other; that way, any new public tracker would spread quickly. There is reason for caution, however: a third party, say the RIAA, might set up such a public tracker just to sniff IPs.

Automatic force-seeding of torrents with no seeds

The goal is to make it easier to keep rarer torrents alive. Currently I have to manually look through my torrent client for torrents that have no seeds and then force-seed them so that the peers can finish their downloads.3 It should probably force-seed for a set time. If it force-seeded only until there were at least two seeds, complications would arise. Imagine that two people are the only seeds and both their clients have this feature. Both clients would then force-seed the torrent until they spot another seed; but then they see each other, and so they keep bouncing on and off force-seeding while the peers get little data. A mix of the two ideas may be best.


3We're presuming here, for simplicity, that if there is no seed, then there is no complete copy of the file in the swarm. This is false in some cases.