Should you get off Twitbookgram?

You might be familiar with stuff like this:

How heavy use of social media is linked to mental illness, The Economist, 2018

Looking over the chart, we find that sadder users use social media a lot more. And indeed, in recent years, sadness and loneliness have been increasing:

So it seems like the conclusion is easy, right? Due to their (by-design) addictiveness, people tend to overuse social media, and thus society overall becomes a little more unhappy and lonely.

As if anything in social science were ever that easy! What if we add controls? Can studies using ‘lots of controls’ find a robust association between social media use and bad feels? Orben and Przybylski (2019) use specification curve analysis, something like Bayesian model averaging but broader, to reach this conclusion:

The widespread use of digital technologies by young people has spurred speculation that their regular use negatively impacts psychological well-being. Current empirical evidence supporting this idea is largely based on secondary analyses of large-scale social datasets. Though these datasets provide a valuable resource for highly powered investigations, their many variables and observations are often explored with an analytical flexibility that marks small effects as statistically significant, thereby leading to potential false positives and conflicting results. Here we address these methodological challenges by applying specification curve analysis (SCA) across three large-scale social datasets (total n = 355,358) to rigorously examine correlational evidence for the effects of digital technology on adolescents. The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change.

OK, but then again, cross-sectional analysis sometimes cannot find evidence of causality even when it is there. This can happen when you control for too much stuff (e.g., controlling for downstream effects, or mediators). Fortunately, it is possible to experimentally manipulate people’s social media use and see what happens. Any good studies? There are, actually. I found 6 randomized controlled trials. Let’s go over them in chronological order:

Most people use Facebook on a daily basis; few are aware of the consequences. Based on a 1-week experiment with 1,095 participants in late 2015 in Denmark, this study provides causal evidence that Facebook use affects our well-being negatively. By comparing the treatment group (participants who took a break from Facebook) with the control group (participants who kept using Facebook), it was demonstrated that taking a break from Facebook has positive effects on the two dimensions of well-being: our life satisfaction increases and our emotions become more positive. Furthermore, it was demonstrated that these effects were significantly greater for heavy Facebook users, passive Facebook users, and users who tend to envy others on Facebook.

Results:

The effect sizes are plausible. The values in parentheses are the standard deviations, so we divide the ITT effect by the control group’s standard deviation, which gets us: 0.26 d for life satisfaction and 0.18 d for the emotions scale. The p values look good, but we should worry about the randomization and selective drop-out. In general, a treatment might be somewhat uncomfortable, causing people to drop out of the treatment arm; if depressed people drop out more, this will show up as a fake treatment effect. Here, though, we have the opposite oddity: the control group is strangely small! The author wrote:

To ensure ecological validity, the study was designed and conducted as a 1-week experiment. On the first day of the experiment, the participants (n = 1,095) answered the pretest (15-minute online questionnaire—questions were asked in Danish). They were then immediately assigned randomly to one of the following conditions:  Do not use Facebook in the following week (treatment group)  Keep using Facebook as usual in the following week (control group) On the last day of the experiment, the posttest was conducted (15-minute online questionnaire—more or less identical with the pretest questionnaire), which 888 participants completed (81 percent completion rate). The participants did not receive any compensation for taking part in the experiment. Instead, they were encouraged to follow their assigned treatment or control condition by reminding them that their participation was of significant value for the study. The pretest showed that the random assignment had successfully balanced the participants’ characteristics across the treatment group and the control group. After the experiment, a comparison of pretest and posttest data showed that the participant dropout in both groups did not differ from the remaining participants on any parameters. This fact supports a belief that the participant dropout did not impact the findings in the subsequent analyses.

I don’t see anything explaining the imbalance in group sizes.
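The standardization used here takes only a few lines. The raw effect and SD below are made-up illustrative numbers, chosen only so the ratio matches the 0.26 d quoted above; they are not the paper’s exact values:

```python
# Cohen's d from an intention-to-treat (ITT) estimate:
# raw treatment effect divided by the control group's standard deviation.

def cohens_d(itt_effect: float, control_sd: float) -> float:
    """Standardized effect size: raw effect / control-group SD."""
    return itt_effect / control_sd

# Hypothetical numbers (not the paper's): a 0.37-point gain in life
# satisfaction against a control-group SD of 1.42.
print(round(cohens_d(0.37, 1.42), 2))  # 0.26
```

Dividing by the control group’s SD (rather than a pooled SD) keeps the denominator uncontaminated by any treatment-induced variance change.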

Anyway, we are pretty happy with this study, with realistic effect sizes, a large sample, and good p values. Let’s move on:

Online social media is now omnipresent in many people’s daily lives. Much research has been conducted on how and why we use social media, but little is known about the impact of social media abstinence. Therefore, we designed an ecological momentary intervention study using smartphones. Participants were instructed not to use social media for 7 days (4 days baseline, 7 days intervention, and 4 days postintervention; N = 152). We assessed affect (positive and negative), boredom, and craving thrice a day (time-contingent sampling), as well as social media usage frequency, usage duration, and social pressure to be on social media at the end of each day (7,000+ single assessments). We found withdrawal symptoms, such as significantly heightened craving (β = 0.10) and boredom (β = 0.12), as well as reduced positive and negative affect (only descriptively). Social pressure to be on social media was significantly heightened during social media abstinence (β = 0.19) and a substantial number of participants (59 percent) relapsed at least once during the intervention phase. We could not find any substantial rebound effect after the end of the intervention. Taken together, communicating through online social media is evidently such an integral part of everyday life that being without it leads to withdrawal symptoms (craving, boredom), relapses, and social pressure to get back on social media.

So wait, this is not actually a randomized trial; it’s a within-person study. But OK, what do they find anyway?

Since this is a longitudinal study, people serve as their own controls, which is probably good enough. This controls for any stable person-level factors, but any factor that changed between the two weeks of the study would mess up the results. I mean, if nothing extraordinary happens during the no-social-media week, and in the follow-up back-to-normal week a meteor strikes Earth and kills a billion people, then comparing the two weeks would be a very bad idea, despite it being the same people. OK, so anyway (no meteors), they find… nothing on emotional effects (positive or negative affect), but people are somewhat more bored and crave social media during the week of abstinence.

Social media use has a weak, negative association with well-being in cross-sectional and longitudinal research, but this association in experimental studies is mixed. This investigation explores whether social media abstinence leads to improved daily well-being over four weeks of time. Community and undergraduate participants (N = 130) were randomly assigned to five experimental conditions: no change in social media use, and one week, two weeks, three weeks, and four weeks abstinence from social media (i.e., Facebook, Twitter, Instagram, Snapchat). All participants completed a daily diary measuring loneliness, well-being, and quality of day. Results showed no main effect of social media abstinence. The duration of abstinence was not associated with change in outcomes and order of abstinence did not explain variance in outcomes. Results are consistent with trivial effects detected in large cross-sectional research, and call into question the causal relationship between social media and well-being on the daily level.

They have a bunch of figures which all look like this:

Basically, they find nothing, and they looked at longer term abstinence as well as short term. They also have a complicated table of models to look at this data all combined:

The authors basically tell us that they find nothing, but looking at their models, 2 of the 3 Model 3 results do show a positive effect of the treatment (abstinence), though only at p < .05. What is Model 3, anyway?

Model 1 examined whether the day of the study showed a linear trend in outcomes as the study progressed for all participants (Table 1). Model 2 included the experimental treatment term, which tested for the presence of a main effect difference in outcomes based on whether participants were abstaining from social media or allowed to use social media. Model 3 tested an interaction effect, which examined whether the nature of the change over time was moderated by the experimental condition (i.e., the dosage effect). The MLM framework allowed for both significance testing and for nested model comparisons to determine whether the addition of new parameter estimates improved model fit.

Across all three models, there was no statistical evidence suggesting that abstaining from social media led to a main effect difference in three psycho- social outcomes or different growth trajectories. That is, days when participants were free to use four types of social media and days when they abstained from using social media were indistinguishable in terms of end of day loneliness, affective well-being, and quality of day – each representing a primary component of subjective well-being (Diener et al., 2017).

So, we have a kind of reverse situation, where the authors actually obtained some p < .05 results but then don’t discuss them. The effect sizes are similar to those of the 2016 Danish study, about 0.20 d. Seems maybe OK. Next study:

Recent research has shown that social media services create large consumer surplus. Despite their positive impact on economic welfare, concerns are raised about the negative association between social media usage and performance or well-being. However, causal empirical evidence is still scarce. To address this research gap, we conduct a randomized controlled trial among students in which we track participants’ digital activities over the course of three quarters of an academic year. In the experiment, we randomly allocate half of the sample to a treatment condition in which social media usage is restricted to a maximum of 10 minutes per day. We find that participants in the treatment group substitute social media for instant messaging and do not decrease their total time spent on digital devices. Contrary to findings from previous correlational studies, we do not find any impact of social media usage on well-being and academic success. Our results also suggest that antitrust authorities should consider instant messaging and social media services as direct competitors before approving acquisitions.

How many people? Why is this information hidden?

A total of 191 respondents completed the first survey. As is typical for longitudinal studies, some students dropped out over time such that 157 students completed survey 2, 144 survey 3, and 121 the final survey. The survey participation corresponds to the number of participants who reported digital activities using the software (see Table 2). The following results will be based on the sample that recorded activities for at least 30 days in block 1 and 2 and completed surveys 1, 2, and 3. We will analyze the post-treatment data from block 3 and survey 4 separately. From the 134 students who recorded activities in block 1 and 2, we were able to match 122 from all data sources, i.e., twelve students did not answer (one of) the surveys or did not follow courses in at least one of the blocks.

The authors note that drop-out was seemingly random. Did people follow the rules?

Yes, the treatment group (plotted in black) markedly reduces its social media use during the treatment period, then rebounds in the post-treatment follow-up. However, this had no effect on well-being or the other outcomes they measured. It looks like this:

So, this is probably another null.

Use of the social platform Facebook belongs to daily life, but may impair subjective well-being. The present experimental study investigated the potential beneficial impact of reduction of daily Facebook use. Participants were Facebook users from Germany. While the experimental group (N = 140; Mage(SDage) = 24.15 (5.06)) reduced its Facebook use for 20 min daily for two weeks, the control group (N = 146; Mage(SDage) = 25.39 (6.69)) used Facebook as usual. Variables of Facebook use, life satisfaction, depressive symptoms, physical activity and smoking behavior were assessed via online surveys at five measurement time points (pre-measurement, day 0 = T1; between-measurement, day 7 = T2; post-measurement, day 15 = T3; follow-up 1, one month after post-measurement = T4; follow-up 2, three months after post-measurement = T5). The intervention reduced active and passive Facebook use, Facebook use intensity, and the level of Facebook Addiction Disorder. Life satisfaction significantly increased, and depressive symptoms significantly decreased. Moreover, frequency of physical activity such as jogging or cycling significantly increased, and number of daily smoked cigarettes decreased. Effects remained stable during follow-up (three months). Thus, less time spent on Facebook leads to more well-being and a healthier lifestyle.

For the two main outcomes, the p values are .011 for life satisfaction and .001 for depression. The first may be a fluke or hacked, but the latter is harder to dismiss. However, the authors used some strange ANOVA models whose output I can’t really interpret. They say: “in the [experimental group] life satisfaction continuously increased between T1 and T5. In the [control group], the change pattern of life satisfaction indicates a reversed u-shape (i.e., increase between T1 and T3, decrease almost to the initial level between T3 and T5).” That is true, but it is not what we expect from a successful treatment! Why is the one group so low to begin with? If anything, it looks like the control group benefited more. The depression plot raises similar interpretation concerns. Weird results; not sure what to make of this.

Introduction Screen time apps that allow smartphone users to manage their screen time are assumed to combat negative effects of smartphone use. This study explores whether a social media restriction, implemented via screen time apps, has a positive effect on emotional well-being and sustained attention performance.
Methods A randomized controlled trial (N = 76) was performed, exploring whether a week-long 50% reduction in time spent on mobile Facebook, Instagram, Snapchat and YouTube is beneficial to attentional performance and well-being as compared to a 10% reduction.
Results Unexpectedly, several participants in the control group pro-actively reduced their screen time significantly beyond the intended 10%, dismantling our intended screen time manipulation. Hence, we analyzed both the effect of the original manipulation (i.e. treatment-as-intended), and the effect of participants’ relative reduction in screen time irrespective of their condition (i.e. treatment-as-is). Neither analyses revealed an effect on the outcome measures. We also found no support for a moderating role of self-control, impulsivity or Fear of Missing Out. Interestingly, across all participants behavioral performance on sustained attention tasks remained stable over time, while perceived attentional performance improved. Participants also self-reported a decrease in negative emotions, but no increase in positive emotions.
Conclusion We discuss the implications of our findings in light of recent debates about the impact of screen time and formulate suggestions for future research based on important limitations of the current study, revolving among others around appropriate control groups as well as the combined use of both subjective and objective (i.e., behavioral) measures.

Pretty small (N = 76). The study also failed because:

The conducted t-tests with the manipulation as the independent variable and the relative reductions of screen time as dependent variables, showed that the manipulation had failed. The difference in screen time reduction between the conditions only reached significance for Instagram and overall social media screen time, and was generally not in line with the reductions that were aimed for (control −10%, experimental −50%; see Table 2). The manipulation failed mostly because participants in the control group reduced their social media app use on average with 38%, which was much more than the intended 10% (see Table 2). Moreover, an examination of the overall screen time revealed that there were no differences between the two conditions in terms of their overall average reduction in screen time, t(74) = − 0.28, p = .779 (see Table 2)6.

So this one doesn’t really tell us anything: it was too small to begin with, and the participants did not comply.

Finally, the best study! Yes, I cheated with the chronology a little bit to make this fit at the end:

Concerns about the consequences of social media use on well-being has led to the practice of taking a brief hiatus from social media platforms, a practice known as “digital detoxing.” These brief “digital detoxes” are becoming increasingly popular in the hope that the newly found time, previously spent on social media, would be used for other, theoretically more rewarding, activities. In this paper, we test this proposition. Participants in three preregistered field experiments (ntot = 600) were randomly assigned to receiving each of two conditions on each of two different days: a normal-use day or an abstinence day. Outcomes (social relatedness, positive and negative affect, day satisfaction) were measured on each of the two evenings of the study. Results did not show that abstaining from social media has positive effects on daily well-being (in terms of social relatedness, positive and negative affect, day satisfaction) as suggested by the extant literature. Participants reported similar well-being on days when they used social media and days when they did not. Evidence indicated that abstinence from social media had no measurable positive effect on well-being, and some models showed significant deficits in social relatedness and satisfaction with one’s day. We discuss implications of the study of social media hiatus and the value of programmatic research grounded in preregistered experimental designs.

This is nice in terms of sample size and planned analyses. It is also by the same person (Andrew K. Przybylski) who did the skeptical regression paper quoted at the beginning. But… this study looks at only 1 day of abstinence! I feel like almost no one would expect that to have a notable effect. So what does this clear demonstration of a near-zero effect size tell us?


Conclusions

So what to do? Living is one ‘forced option‘ after another. Should you reduce social media use? Get rid of it altogether? It’s hard to say, even after going over 6 randomized trials. Humanity has been semi-addicted to social media for 15 years, and social science can’t tell us… whether this is good or bad? I wouldn’t personally restrict my children’s use of social media based on this research. The effect size is plausible (i.e., small, 0.2 d), and somewhat uncertain. The Danish study produced the most convincing evidence; its author appears to have left academia immediately after doing it, even though the study received tons of media exposure when it came out. Ideally, we would simply assign a few research groups to repeat that experiment in some different places, then look at the results in combination.
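Looking at replications “in combination” could be as simple as a fixed-effect meta-analysis with inverse-variance weighting. A minimal sketch, with made-up effect sizes and standard errors for three hypothetical replications:

```python
# Fixed-effect meta-analysis: weight each study's effect size by the
# inverse of its variance, so more precise studies count for more.
import math

def pool_fixed_effect(effects, ses):
    """Inverse-variance-weighted mean effect and its standard error."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical replications: (Cohen's d, standard error).
d, se = pool_fixed_effect([0.26, 0.15, 0.22], [0.08, 0.10, 0.09])
print(round(d, 2), round(se, 3))  # 0.22 0.051
```

A fixed-effect model assumes the replications estimate one common effect; if the sites differ a lot, a random-effects model would be the more cautious choice.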

I think the meta-take-away from this review is that social science has serious prioritization issues. Why do we have 1,000+ useless studies on candidate genetics, mostly fake GxE, ego depletion, and all manner of other bullshit or politically convenient falsehoods, AND YET we somehow don’t know whether the largest change to human social behavior in the last… ever has negative effects on mental health, despite a seeming mental health crisis? It beggars belief. Where are the adults in academia?