I was looking for something else… and found this instead. From here: able2know.org/topic/151812-1


I think a 9 year old killing himself is a good representation of suicide as a whole. Selfish, short-sighted, and always looking for an escape. Although it seems insensitive for me to say this, I think we just need to realize that those things are a part of deciding to end your life.


It is funny that people who stay alive because they want to, call people “selfish” who kill themselves because they want to. (It is like those who have children because they want children calling childless couples selfish for not having children because they don’t want to have children.)

And it is absurd to call a solution to all of life’s problems forever a “short-sighted” solution. The 9 year old could not possibly have come up with any other solution to his problems that would have been so complete and long lasting.


Eriksson, Anders, and Francisco Lacerda. “Charlatanry in forensic speech science: A problem to be taken seriously.” International Journal of Speech, Language and the Law 14.2 (2007): 169–193.


I came across this study while doing research on false rape accusations… yes i get around. :P




In the peer-reviewed academic article “Charlatanry in forensic speech science”, the authors reviewed 50 years of lie detector research and came to the conclusion that there is no scientific evidence supporting that lie detectors actually work.[24] Lie detector manufacturer Nemesysco threatened to sue the academic publisher for libel resulting in removal of the article from online databases. In a letter to the publisher, Nemesysco’s lawyers wrote that the authors of the article could be sued for defamation if they wrote on the subject again.[25][26]




A lie detector which can reveal lie and deception in some automatic and perfectly reliable way is an old idea we have often met with in science fiction books and comic strips. This is all very well. It is when machines claimed to be lie detectors appear in the context of criminal investigations or security applications that we need to be concerned. In the present paper we will describe two types of ‘deception’ or ‘stress detectors’ (euphemisms to refer to what quite clearly are known as ‘lie detectors’). Both types of detection are claimed to be based on voice analysis but we found no scientific evidence to support the manufacturers’ claims. Indeed, our review of scientific studies will show that these machines perform at chance level when tested for reliability. Given such results and the absence of scientific support for the underlying principles it is justified to view the use of these machines as charlatanry and we argue that there are serious ethical and security reasons to demand that responsible authorities and institutions should not get involved in such practices.

keywords: lie detector, charlatanry, voice stress analysis, psychological stress evaluator, microtremor, layered voice analysis, airport




To keep this distinction in mind has methodological implications. It seems reasonable, from a methodological point of view, to begin by determining the validity of a suggested method before it makes much sense to study its reliability. If the method can be shown to lack validity altogether it will as a consequence also be unreliable and carrying out a reliability test meaningless. If the validity is not known it will be a ‘black box’ whose reliability, if any, will remain unexplained. We must keep in mind, however, that validity and reliability are not all or nothing concepts. A method may be valid to a degree and reliability may range from very poor to almost perfect. At the far end of the negative scale we find things like astrology. It would be a complete waste of time to design experiments to determine how precisely horoscopes may predict future events when we know that the validity of the method is non-existent. At the positive end of the scale we find methods like DNA testing whose validity is solidly supported by scientific evidence and whose reliability is extremely high, albeit not perfect.


they got the implication wrong. the correct implication is (∀x)(¬reliable(x) → ¬valid(x)): if a test does not give consistent results for the same stimuli, it cannot possibly be measuring anything. by contraposition, (∀x)(valid(x) → reliable(x)): if a test can be used to predict something, its measurements cannot be random, since random input cannot predict anything, not even random output. more precisely, (∀x)(∀n)(valid(x, n) → (∃m)(reliable(x, m) ∧ m ≥ n)): for any given test and any number n in [0, 1], if the test has a validity of n for predicting something, it has a reliability m at least as large as n. the reliability thus puts an upper bound on the validity of a test (in classical test theory the bound is in fact the square root of the reliability, but the direction is the same: low reliability caps validity).
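the point can be checked with a quick simulation (a sketch only; the noise levels, the sample size, and the `simulate`/`corr` helpers are all made up for illustration). a test is modeled as a latent trait plus independent error on each administration; as the error grows, the test–retest correlation (reliability) and the correlation with the criterion (validity) fall together, and zero reliability forces zero validity:

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    # Pearson correlation between two equal-length lists
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

def simulate(n=50_000, noise_sd=1.0):
    # each person has a latent trait; each administration observes trait + error
    trait = [random.gauss(0, 1) for _ in range(n)]
    test1 = [t + random.gauss(0, noise_sd) for t in trait]
    test2 = [t + random.gauss(0, noise_sd) for t in trait]
    reliability = corr(test1, test2)  # test-retest correlation
    validity = corr(test1, trait)     # correlation with the criterion (the trait)
    return reliability, validity

for noise in (0.5, 1.0, 3.0):
    rel, val = simulate(noise_sd=noise)
    print(f"noise={noise}: reliability={rel:.3f}, validity={val:.3f}")
```

in this setup the validity comes out near the square root of the reliability, which is the classical test theory ceiling; the exact numbers depend on the assumed noise levels, but a reliability near zero always drags the validity to zero with it.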


In fact astrology has both low to zero reliability* and almost zero validity. for instance, if used in a cohort of children to predict height, it can have some validity for the simple reason that children with certain star signs are taller than children with certain other star signs, given that they are born in the same year. this has nothing to do with astrology and everything to do with some children having had more time to grow than others. if one used a non-child cohort, astrology’s validity for height prediction goes back to 0. at least, in modern societies where food intake does not depend on the seasons. for societies where it does, astrology might retain some validity, but not because the planets have any effect on people, just because the seasons do.


*that is, reliability in assigning star signs to people based on a description of their personality, or reliability in giving predictions for the future for any given star sign. for proof of the latter point, find a number of randomly chosen astrology sources and compare their predictions for the future for any given star sign. they don’t agree at all; hence, as a group they cannot possibly predict anything.



Is there anything we can do to prevent charlatanry in forensic speech science?


Charlatanry, fraud, prejudice and superstition have always been with us. If we look back in history and compare with what we see today there is little that gives us hope that progress in science will diminish the amount of superstitious nonsense we see around us. Astrology, for example, seems to be more popular than ever and totally unaffected by how many times astronomers explain that it is complete nonsense. We are therefore somewhat pessimistic about the possibility of efficiently removing charlatanry from forensic speech science. But we hope that responsible authorities like the police and security services will listen to scientifically trained experts in the field rather than to smooth talking and wishful thinking from vendors of bogus lie detectors and similar gadgets. That is probably where we should invest our efforts. We must also take great care when we present our results so that the issue does not appear as a scientific controversy, which it is not. No qualified speech scientist believes in this nonsense so there is absolutely no controversy there, and it is very important that this becomes clear. We have included sufficient detail in this paper to provide the reader with useful arguments in the struggle against charlatanry. We hope that the effort will not turn out to be totally without effect.



Retaking ability tests in a selection setting: Implications for practice effects, training performance, and turnover

Found via rationalwiki.org/wiki/High_IQ_society




This field study investigated the effect of retaking identical selection tests on subsequent test scores of 4,726 candidates for law enforcement positions. For both cognitive ability and oral communication ability selection tests, candidates produced significant score increases between the 1st and 2nd and the 2nd and 3rd test administrations. Furthermore, the repeat testing relationships with posthire training performance and turnover were examined in a sample of 1,515 candidates eventually selected into the organization. As predicted from persistence and continuance commitment rationales, the number of tests necessary to gain entry into the organization was positively associated with training performance and negatively associated with turnover probability.



Although the coaching studies are informative, test practice alone is the issue of interest in the present study. Kulik, Kulik, and Bangert (1984) summarized early research on practice effects using meta-analysis. The authors drew almost exclusively on studies with student populations to examine practice effects on aptitude and achievement test scores. They reported that test score increases in the second administration were larger when identical tests were used (0.42 SD) than when parallel forms of the tests were used (0.23 SD). The authors also found a significant positive relationship between test takers’ ability and size of the practice effect, as effect sizes over two identical tests were 0.80 SD, 0.40 SD, and 0.17 SD for subjects of high, middle, and low ability, respectively. Finally, multiple test repetitions resulted in larger practice effects, with a 0.42-SD mean increase from the first to the second administration of an identical test (19 studies), a 0.70-SD improvement from the first to the third administration (6 studies), and a 0.96-SD increase from the first to the fourth administration (5 studies). In the most recent research on practice effects, psychologists have examined intelligence testing from a clinical perspective. Studies of the Wechsler Adult Intelligence Scale—Revised and numerous other neuropsychological measures indicate that improved scores tend to occur with repeat administrations of most measures (Rapport, Axelrod, et al., 1997; Rapport, Brines, Axelrod, & Theisen, 1997; Watson, Pasteur, Healy, & Hughes, 1994).


in other words, the Matthew effect at work. if we let everybody prep for tests, the scores will become more UNEQUAL, not more equal. plainly, smarter people get more out of practicing.
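a quick back-of-the-envelope with the Kulik, Kulik, and Bangert effect sizes quoted above shows the gap widening (the starting scores of ±1 SD are my own illustrative assumption, not from the study):

```python
# practice gains (in SD) for high- and low-ability test takers,
# taken from the Kulik, Kulik, and Bangert (1984) figures quoted above
high_gain, low_gain = 0.80, 0.17

# assumed pre-practice scores in SD units (illustrative only)
high0, low0 = 1.0, -1.0

gap_before = high0 - low0
gap_after = (high0 + high_gain) - (low0 + low_gain)
print(gap_before, gap_after)
```

under these assumptions the pre-practice gap of 2.0 SD widens to 2.63 SD after one retake, simply because the high-ability group gains more from the same practice.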




(from A natural history of negation)

i had been thinking about a similar idea, but these work fine as a beginning. a good hierarchy needs a level for approximate truths as well (like Newton’s laws), in addition to actual truths. but also perhaps a dimension for the relevance of the information conveyed. a sentence can express a true proposition without that proposition being relevant to the making of just about any real life decision. for instance, the true proposition expressed by “42489054329479823423 is larger than 37828234747” will in all likelihood never, ever be relevant to any decision. also one can note that the relevance dimension only begins when there is actually some information conveyed; that is, it doesn’t apply below level 2, as those levels are meaningless pieces of language.

and things that are inconsistent can also be very useful, so it’s not clear how falseness, approximate truth, and truth relate to usefulness. but i think that the closer something is to the truth, the more likely it is to be useful. naive set theory is fine for working with many proofs, even though it is an inconsistent system.

Men and women are from Earth: Examining the latent structure of gender.

Understanding Dimensions and Taxa

One reason why the underlying nature of gender differences has been difficult to address is that although biological sex is clearly a categorical variable, the variables commonly of interest to researchers and laypersons alike tend to be dimensional (e.g., masculinity, femininity, school achievement, depression, aggression), varying along a continuum. The statement that men are more aggressive than women, for example, implicitly assumes that there is one group of people who are high in aggression (men) and another group of people who are low in aggression (women). This assumption treats an observed mean difference between men and women as a special kind of category called a taxon. Examples of taxa include animal species (gophers vs. chipmunks), certain physical illnesses (e.g., one either has meningitis or not), and biological
no it doesn’t. “men are more aggressive than women” has what logicians call a missing quantifier, meaning that one has to infer it from context. in this case it is pretty clear that the intended quantifier is “usually” or “typically”, which makes the sentence equivalent in meaning to “the average aggressiveness of men is higher than that of women”. another quantifier could be “all”, but no one seriously thinks that all men are more aggressive than all women. there is a difference in the averages, and i think most people agree with that.

Although gender differences on average are not under dispute, the idea of consistently and inflexibly gender-typed individuals is. That is, there are not two distinct genders, but instead there are linear gradations of variables associated with sex, such as masculinity or intimacy, all of which are continuous (like most social, psychological, and individual difference variables). Thus, it will be important to think of these variables as continuous dimensions that people possess to some extent, and that may be related to sex, among whatever other predictors there may be. Of course, the term sex differences is still completely reasonable. In a dimensional model, differences between men and women reflect all the causal variables known to be associated with sex, including both nature and nurture. But at least with regard to the kinds of variables studied in this research, grouping into “male” and “female” categories indicates overlapping continuous distributions rather than natural kinds.

they seem confused. it does not follow that the genders are not distinct just because the indicators of the genders are dimensional rather than taxonomic. although one could think of people’s personalities as lying on a continuum from archetypal male to archetypal female.

This research also adds further evidence to the current debate about whether it is more profitable to focus this literature on gender differences or gender similarities (Hyde, 2005). “The gender similarities hypothesis states, instead, that males and females are alike on most—but not all—psychological variables” (Hyde, 2005, p. 590). Our research shows, moreover, that even those variables on which males and females are not alike may be evidence of variations along a continuous dimension rather than categorical, and as Hyde terms them, “overinflated claims of gender differences” (Hyde, 2005, p. 590). Clearly, if differences between men and women are conceptualized as variations along a continuum, there is little reason to reify these differences with the sorts of extremities typically mentioned. Instead, these differences would be seen as reflecting all the influences that are brought to bear on an individual’s growth, development, and experience, and would be relatively amenable to modification.

no such thing follows. gender differences can be small with lots of overlapping variation and still be 100% genetic, and thus not changeable with the usual socialization tools.

If gender is dimensional, why do categorical stereotypes of men and women persist in everyday life? Although our research does not speak to this issue, several explanations seem relevant. One reason is that people tend to think categorically (Medin, 1989), or as Fiske (2010) put it, referring to both laypeople and researchers, “we love dichotomies” (p. 689). People use easily accessible categories to help organize the abundance of information that the social world presents, a mental shortcut that has come to be known as the “cognitive miser” hypothesis (Fiske & Taylor, 1991). Because sex is one of the most readily observed human traits, it forms an easy and common basis for categorizing other persons. As a result, because other qualities tend to be accommodated to accessible categories, and because men and women do differ in myriad ways, category-based generalizations maximize the difference between the sexes while minimizing differences within them (e.g., Fiske & Neuberg, 1990; Taylor et al., 1978). Furthermore, as Krueger, Hasman, Acevedo, and Villano (2003) showed, it may be rational to accentuate intergroup differences whenever these differences are easy to learn, fairly accurate, and helpful for action.

there are patterns in experience and in nature, and one sign of intelligence is to spot those patterns and use them to make decisions. stereotypes are useful for this.

It may be fruitful to consider how our findings are bound to the cultural and historical context within which the data were collected. With a few exceptions, most of these data were collected from young Americans in the last quarter of the 20th century. This is a time and setting in which differences between men and women were shrinking, reflecting societal, economic, and educational circumstances that contributed to the increasing liberalization of gender roles (Brooks & Bolzendahl, 2004). Indeed, it seems likely that were we to examine new data sets collected in 2012, they would, if anything, be even more likely to be dimensional. This point suggests two important implications. First, to the extent that our data sets are outdated, they should have been more likely to reveal a taxonic structure (which they did not), making our support for dimensionality more compelling. Second, if suitable data sets can be found, historical comparisons of underlying structures may prove revealing of the impact of societal trends.

some things are shrinking, others are apparently increasing with increasing HDI. see roseproject.no/?page_id=39

in a happy coincidence, i recently learned about www.okstereotype.me/, a site that guesses (stereotypes) various things about you from your profile text on dating sites. they must have data that can produce Bayesian probability distributions like those in the article.
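a minimal sketch of how such guessing could work, as a naive Bayes classifier over profile words (the training profiles and labels below are entirely invented, and this is not necessarily the site’s actual method):

```python
from collections import Counter
import math

# toy training data: (profile text, label) pairs -- entirely made up
profiles = [
    ("i love football and beer", "male"),
    ("gaming and grilling on weekends", "male"),
    ("yoga wine and my two cats", "female"),
    ("i enjoy baking and romance novels", "female"),
]

def train(data):
    # per-class word counts, class counts, and the shared vocabulary
    word_counts, class_counts, vocab = {}, Counter(), set()
    for text, label in data:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def posterior(text, word_counts, class_counts, vocab):
    # log P(class) + sum of log P(word | class), with Laplace smoothing
    total = sum(class_counts.values())
    logp = {}
    for label, n in class_counts.items():
        lp = math.log(n / total)
        wc = word_counts[label]
        denom = sum(wc.values()) + len(vocab)
        for w in text.split():
            lp += math.log((wc[w] + 1) / denom)
        logp[label] = lp
    # convert log-probabilities to a normalized distribution
    m = max(logp.values())
    exp = {k: math.exp(v - m) for k, v in logp.items()}
    z = sum(exp.values())
    return {k: v / z for k, v in exp.items()}

wcs, ccs, vocab = train(profiles)
print(posterior("football and gaming", wcs, ccs, vocab))
```

with enough real profile data, the same machinery yields a probability distribution over any labeled attribute, which is exactly the kind of output the article’s figures show.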

From someone named <Anonymous> on G+ i saw a link to a blog post from a doctor about the dangers of fluoride. that sounded interesting but potentially nutty (conspiracy nuts like such ideas). from reading the discussion at various sites it was still unclear what i should believe, so i downloaded the actual study cited and read it. it’s a pretty decent correlational systematic review. causation can be difficult to establish here, but there should be some natural experiments that can be used. for instance, some areas currently have high levels of fluoride in the water for natural reasons. we can test the children of those areas, then fix the drinking water by lowering fluoride levels to those used in western countries, about 1 mg/L, then wait some years, like 10, and test some other children. if fluoride is causing lowered IQ scores, the scores should have gone up by then. apply some stats to get rid of any potential Flynn effect. it should be somewhat easy to run this experiment in developing countries.
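the proposed design boils down to a before–after difference corrected for the secular trend. a sketch with purely hypothetical numbers (including the assumed Flynn gain of 3 points per decade; none of these values come from the study):

```python
# hypothetical inputs for the natural-experiment design sketched above
iq_before = 95.0        # mean child IQ in the high-fluoride area, before intervention
iq_after = 101.0        # mean child IQ ~10 years after lowering fluoride to ~1 mg/L
flynn_per_decade = 3.0  # assumed secular (Flynn) gain over the same period

# the part of the change not explained by the secular trend is the
# candidate fluoride effect, under these assumptions
fluoride_effect = (iq_after - iq_before) - flynn_per_decade
print(fluoride_effect)
```

a real analysis would of course need control areas, sampling error, and checks that nothing else changed in the interval, but the logic is just this subtraction.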

Developmental Fluoride Neurotoxicity: A Systematic Review and Meta-Analysis


Background: Although fluoride may cause neurotoxicity in animal models and acute fluoride poisoning causes neurotoxicity in adults, very little is known of its effects on children’s neurodevelopment.

Objective: We performed a systematic review and meta-analysis of published studies to investigate the effects of increased fluoride exposure and delayed neurobehavioral development.

Methods: We searched the MEDLINE, EMBASE, Water Resources Abstracts, and TOXNET databases through 2011 for eligible studies. We also searched the China National Knowledge Infrastructure (CNKI) database, because many studies on fluoride neurotoxicity have been published in Chinese journals only. In total, we identified 27 eligible epidemiological studies with high and reference exposures, end points of IQ scores, or related cognitive function measures with means and variances for the two exposure groups. Using random-effects models, we estimated the standardized mean difference between exposed and reference groups across all studies. We conducted sensitivity analyses restricted to studies using the same outcome assessment and having drinking-water fluoride as the only exposure. We performed the Cochran test for heterogeneity between studies, Begg’s funnel plot, and Egger test to assess publication bias, and conducted meta-regressions to explore sources of variation in mean differences among the studies.

Results: The standardized weighted mean difference in IQ score between exposed and reference populations was –0.45 (95% confidence interval: –0.56, –0.35) using a random-effects model. Thus, children in high-fluoride areas had significantly lower IQ scores than those who lived in low-fluoride areas. Subgroup and sensitivity analyses also indicated inverse associations, although the substantial heterogeneity did not appear to decrease.

Conclusions: The results support the possibility of an adverse effect of high fluoride exposure on children’s neurodevelopment. Future research should include detailed individual-level information on prenatal exposure, neurobehavioral performance, and covariates for adjustment.

Keywords: fluoride, intelligence, neurotoxicity. Environ Health Perspect 120:1362–1368 (2012). dx.doi.org/10.1289/ehp.1104912 [Online 20 July 2012]
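for illustration, the random-effects pooling the authors describe can be sketched as a DerSimonian–Laird estimator (the per-study effect sizes and variances below are invented placeholders, not the 27 studies from the paper):

```python
import math

# hypothetical (standardized mean difference, variance) pairs per study,
# purely to illustrate the random-effects pooling step
studies = [(-0.30, 0.02), (-0.55, 0.03), (-0.45, 0.015), (-0.60, 0.05)]

def dersimonian_laird(studies):
    # fixed-effect inverse-variance weights and pooled mean
    w = [1 / v for _, v in studies]
    ybar = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (y - ybar) ** 2 for wi, (y, _) in zip(w, studies))
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights, pooled estimate, and 95% CI
    w_re = [1 / (v + tau2) for _, v in studies]
    pooled = sum(wi * y for wi, (y, _) in zip(w_re, studies)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, ci = dersimonian_laird(studies)
print(pooled, ci)
```

with these made-up inputs Cochran’s Q happens to fall below its degrees of freedom, so the between-study variance estimate is zero and the pooled value matches the fixed-effect average; real data with the heterogeneity the paper reports would inflate tau² and widen the interval.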