CHICAGO — Chicago Public Schools is monitoring students’ social media posts for signs they might engage in violence on campus or harm themselves so that school staff — and in some cases police — can intervene.
A Canada-based company the district hired started scouring public posts for threats and “cries for help” last month. District leaders say the program is key in efforts to prevent violence and self-harm as the district responds to an uptick in school shootings nationally and in the number of local students expressing suicidal thoughts.
The Chicago program, called "Supporting Our Students," is part of a national trend, as more districts have started paying contractors for social media monitoring services in recent years.
So far, there is no independent research or other evidence showing that student social media monitoring programs are effective in preventing violence and self-harm. In a social media space full of fake accounts, anonymous posts, and grandstanding, civil rights advocates say they worry these programs might needlessly invade students’ privacy and feed into stubborn discipline disparities facing some students, especially Black boys and other students of color.
Cassie Creswell of the nonprofit Illinois Families for Public Schools says surveilling students online could undermine the district’s most important tool for getting information that helps avert violence.
“Kids have to have good relationships and trust with adults in their school communities, and that’s how we prevent stuff from happening,” she said. “Spying on kids is not how we do that.”
But Jadine Chou, the district’s safety and security chief, said the new program is not meant to punish students, and the district will involve police only when staff believe a post signals an imminent threat to safety. Rather, district officials say, the program aims to help students amid a pandemic-era rise in youth mental health challenges, as social media has come to play a key role in fomenting conflict and bullying on campus.
“This is not about getting children in trouble,” Chou said in an interview with Chalkbeat. “This is about getting children support.”
Chicago Public Schools piloted a similar social media monitoring program in the mid-2010s. A University of Chicago study deemed the effort promising, with evidence that it helped reduce student misconduct incidents and suspensions. But data also showed Black and male students were more likely to get flagged for concerning behavior, and some advocates voiced concerns about the involvement of a Chicago Police Department school gang unit.
New social media program draws on pilot’s lessons
In 2020, Chicago Public Schools won a $750,000 grant from the U.S. Department of Justice to launch “Supporting Our Students,” through a federal program titled STOP School Violence.
It wasn’t until this April that the district put out a notice seeking proposals from companies to monitor student social media for “worrisome online behavior,” such as threats, suicidal ideation, and references to drugs, weapons, or gang activity. The document said the district was especially concerned about an almost 60% increase in reports of suicidal ideation among students over the previous school year, with about 300 such reports received since August 2021. It planned to spend $450,000 on the program over three years.
Two companies, Safer Schools Together and GoGuardian, submitted proposals. The district selected SST, a firm based in Canada, with U.S. headquarters in Washington state. Officials said the company offered both a lower cost and better methodology for flagging concerning online behavior. In August, the school board approved an agreement with SST for up to $161,400 over 13 months.
“Supporting Our Students” comes on the heels of a pre-pandemic student social media monitoring pilot program called “Connect & Redirect to Respect,” which was also supported by a federal grant from the Department of Justice.
That program involved random keyword searches of public social media profiles. In some cases, officers with CPD’s Gang School Safety Team met with students at their schools to discuss troubling posts. Staff referred students to mentoring, after-school, and summer programs and other services.
A report by the University of Chicago’s Crime Lab compared outcomes for students at schools participating in the program — about two dozen elementary and high schools, most of them on the city’s West and South sides — and for those at a control group of schools with similar demographics. The study found students at participating schools were at a lower risk of becoming a shooting victim, though the difference was not statistically significant.
It did show these schools had significantly fewer misconduct incidents and suspensions and better attendance, while students were not any more likely to be arrested. Data in the report also shows that students flagged through the program were more likely to be Black and much more likely to be male than students referred for intervention by school staff.
The researchers interviewed teachers and administrators who said many conflicts at school start out on social media, and some educators more informally monitor posts in hopes of warding off trouble on campus.
But reporting by ProPublica and WBEZ highlighted concerns by advocates and experts who questioned the practice of pulling students into meetings with police officers based on information gleaned from a program that students and their families didn’t know about.
The new program will be different in some key ways, officials said. The district’s earlier pilot initially used software to flag posts, but, Chou said, “The algorithm did not catch a lot of the situations we are most concerned for.”
SST’s proposal and the district’s contract with the company say it will use technology to scan posts, but Chou said it will ultimately rely on trained people to review them and flag any content as concerning.
Thanks to SST’s involvement, the district said in a statement, the collection of information off social media will operate “at arm’s length” from the district, ensuring that it collects only data relevant to school safety.
The company will also offer guidance to school safety teams on responding when it flags concerning posts. It will work closely with the district’s Office of Social and Emotional Learning and school-level behavioral health teams to intervene with students and engage their parents. The Chicago Police Department will play a more limited role this time, in keeping with a broader rethinking of the district’s relationship with police.
As part of a district initiative, a growing number of high schools have stopped stationing police officers on campus in the past two years and used the money for restorative justice and other programs. But there will be times police will have to be involved, Chou said.
“If there’s a gun in your video,” Chou said, “I’m going to need to pull in the police.”
As in the earlier program, only publicly posted information will be monitored and collected, and the district and company won’t “friend” or follow students. The contract with SST spells out some measurable goals for the program: decreasing serious infractions, suspensions, and expulsions by 10% each, and student arrests by 5%.
Chou says social media threats and bullying are top of mind for students and parents she’s spoken with about school safety. But the program will be one tool in a much broader district safety strategy, Chou said — with SST serving as “a partner in case something gets missed.”
“When we have strong relationships with students and families, that’s where we get our best information,” she said.
Software ‘spying’ is no substitute for student support
Arseny Acosta, a junior at DeVry University Advantage Academy and an advocate with youth group Good Kids Mad City, said many students want to take a more active role in keeping their campuses and communities safe. She pointed to the group’s key role in a Dyett High School for the Arts restorative justice program and a social media “peace pledge” it penned as part of its “Peacebook” anti-violence proposal.
But monitoring students’ social media feels invasive, she said. It could add to mistrust among Black and Latino students, who feel they are still held to a higher discipline standard, Acosta said.
“This idea will most likely backfire, and make students more distrustful of CPS,” Acosta said. “CPS should be empowering and employing their student youth to create safety networks.”
Officials in some districts have said that programs monitoring student social media for keywords produce an excess of “noisy data” that school staff must sift through to find any credible threats, said Elizabeth Laird of the nonprofit Center for Democracy & Technology.
Though some companies have touted case studies or data they have collected, she said, “There is no independent research or data that shows this service works — that it’s an effective strategy to keep students safe.”
Some of those questions about effectiveness were rekindled by the May 24 school shooting in Uvalde, Texas. That district had a social media monitoring program in place, but it apparently did not flag threatening social media posts by the shooter.
Meanwhile, advocates are concerned that these programs might disproportionately zero in on certain student groups, including students of color and LGBT students, and chill students’ free expression online. Districts have not been transparent enough about these programs and how they work, Laird said. The federal government recently cautioned school districts and other entities about using digital surveillance software that might exacerbate racial and other disparities.
SST didn’t respond to an interview request, but on its site, the organization says it has “a proven record” of helping schools across North America reduce the risk of student violence. It says open-source social media posts it has flagged triggered “successful school/community interventions and full scale police investigations and prosecution.”
The company’s proposal to the district, obtained by Chalkbeat, offers relatively few specifics about these successes. The proposal says its Worrisome Online Behavior reports are “well-received by our clients,” and quotes a safety official with the Lynwood Unified School District in California who says the reports have been an “essential tool” in ensuring school safety. The company has provided the reports to 80 districts in the past two years, consulting with districts on 1,600 interventions, its proposal says.
Under its contract with Chicago Public Schools, SST is required to submit biweekly reports on the number of “worrisome online behaviors” it flagged, the number of students involved in them, and the number of students receiving interventions as a result.
But in response to a Freedom of Information Act request by Chalkbeat, the district said these reports will not be made public. They contain “highly sensitive student information and outcries,” the district said, and their release would constitute a “clearly unwarranted invasion of personal privacy” under FOIA and the state’s Student Online Personal Protection Act.
Creswell, with Illinois Families for Public Schools, worked on the latest version of that state law. She says she wants to know more about the specific circumstances under which information gleaned from the program will be shared with law enforcement. The district should be doing more to get word of the program to students and their families, she said, and to better explain how it will safeguard against racial and income disparities.
Edward Vogel of Lucy Parsons Labs, a Chicago-based nonprofit that advocates for digital rights, said he believes the district’s interest in supporting students is genuine. But young people often engage in grandstanding on social media, and he questioned the wisdom of tasking people who have no connection to students or their school communities with reviewing posts.
“Social media is a tool that people in gangs use, but there are also lots of young people who say things on social media that are meaningless,” Vogel said. “It’s a murky area to use for assessing threats.”
Given that many student accounts are private or don’t use students’ real names, Vogel said, how will SST even go about finding the right accounts to monitor?
The district said it’s paramount that the program not perpetuate racial disparities, and that SST staff have received implicit bias training “to ensure that this effort is not targeting any specific groups.”
Chou stressed the district is not turning over any student names to SST; the company searches instead for references to the district and its schools. Flagged accounts often don’t use students’ real names, so district and school staff work together to identify students who might need help, such as counseling, mentoring, and other support.
She said the program has already had “a small number of successes where we have been able to intervene and support students,” though she declined to share any additional details. Chou said the district might be able to share aggregate data on the program’s results that better protects student privacy after the effort has been in place longer.
Chou said the district has worked to get the word out about the program. The district has not sent emails or letters specifically about the program, but a back-to-school email from district CEO Pedro Martinez included a mention of it on a list of school safety measures.
Chou also briefly mentions the program in a video posted on a revamped Office of Safety and Security website, in which she implores students and families to alert their schools or the district if they come across threats or other troubling content online.
“I want everyone to know,” Chou told Chalkbeat about the monitoring program. “This is not a secret.”
Mila Koumpilova is Chalkbeat Chicago’s senior reporter covering Chicago Public Schools. Contact Mila at firstname.lastname@example.org.