Tag Archives: behavior

How to be active & anonymous online – a guide for women in religiously-conservative countries

In the world we all live in, most people have to be online regularly:

  • There is essential government and business information that can be accessed only online, or can be accessed most cheaply and easily online.
  • There is breaking news that can affect a person’s life or livelihood and, therefore, needs to be learned as close to real time as possible – and that can happen only online.
  • There is information related to our work that is most quickly and easily accessed online.

And “online” includes using social media, such as Facebook and Twitter.

However, in many religiously-conservative communities around the world, women take a huge risk by being online, specifically in using social media. I explore this in a blog post I wrote, virtue & reputation in the developing world. Because of threats to their reputation and safety, many women in religiously-conservative countries such as Afghanistan and Pakistan have given up on having a virtual identity at all – I personally know two such women, both professionals. This greatly hinders their ability to connect with potential colleagues abroad who could help them in their work, to build a professional reputation beyond the walls of their office or the staff of their organization, and to access information essential for their work and lives.

There are ways for women to develop an online profile on social media, including Facebook, that allows them to access essential information, post information and network with professionals in their field of expertise, while still protecting their identity. Here are some guidelines:

Choose a first and last name you will use online only
These should be different from your real names. At the same time, try not to choose a name that belongs to a real person – search the name online to check. You can also use just an initial – one letter – for your first name.
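For readers comfortable with a little scripting, the advice above – a single-letter first name plus an invented surname – can be sketched as a tiny Python generator. The word lists here are made up purely for illustration; the point is to build the name from cryptographically secure random choices (Python’s `secrets` module) rather than from anything derived from your real name:

```python
import secrets

# Illustrative fragments only; a real list would be longer.
# Always search any generated name online to make sure it
# doesn't belong to a real person.
FIRST_INITIALS = "ABCDEFGHJKLMNPRST"
SURNAME_STARTS = ["Riv", "Hal", "Mor", "Sel", "Dar", "Lin"]
SURNAME_ENDS = ["en", "ara", "is", "oun", "ette", "an"]

def make_pseudonym() -> str:
    """Combine a random initial with a synthetic surname,
    so the result is not tied to any real identity."""
    initial = secrets.choice(FIRST_INITIALS)
    surname = secrets.choice(SURNAME_STARTS) + secrets.choice(SURNAME_ENDS)
    return f"{initial}. {surname}"

print(make_pseudonym())  # e.g. "K. Selara"
```

This is only a sketch: any offline method of inventing a name works just as well, as long as the result has no connection to you.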

Create an email address for your anonymous profile
Gmail is a good choice. Use an address that in no way involves your real name. Associate this address with your social media accounts, rather than your work or university email address.

Be vague online about your employer or university
On any social media site, such as Facebook, do not give the full, real name of your employer or the university you currently attend. Identify yourself more vaguely, such as:

  • employee of an Afghan government ministry
  • assistant at an Egyptian dental office
  • nurse at a hospital in Kuwait
  • student at a university in Kabul

Be careful who you friend on Facebook
If you can, talk face-to-face with people you trust who know your real name before friending them on Facebook, and explain why it is so important that they keep your identity secret if you connect on social media. Ask yourself: if you have an argument with this person, will he or she reveal your true identity online? Friend only people you can trust who know your real name, and those people need to understand that they must NOT tell others who you are online or make comments that would reveal who you are. When in doubt, don’t friend local people at all; focus instead on international colleagues who fully understand your situation or who do not know you offline at all.

Do not share photos of yourself where your face can be seen
You can share photos of yourself on social media where your identity cannot be determined – for instance, a photo in which you are standing with your back to the camera and not wearing distinctive clothing, or a photo of just your hands.

Do not share photos of family or friends
This could make it easier for people to figure out who you are.

Have a physical address that isn’t your home or workplace
Sometimes, to register on a particular web site, you must provide a physical address for your home or workplace. Pick a public place as the address you will use: a public library or a bookstore are good choices. Those places may end up getting paper mail addressed to your fake identity, and that’s okay: there is no way for it to be traced back to you, and it won’t be mail you want. Never use your actual home, workplace or university address for your anonymous profile.

Post status updates that do not indicate your identity
You can share memes and news stories (always verify first that they are true), write status updates about the weather, offer your opinion on current affairs, or give advice related to your country or your profession. But don’t write specifics, such as “I just attended a great class on the state of water and sanitation in Luxor” – that’s too specific, and could be used by someone who reads it to figure out who you are.

Be careful when commenting on the Facebook status updates of friends
If one of your colleagues posts a status update, and you comment that “I look forward to talking to you about this at the staff meeting on Monday at 4”, one of their other friends who is NOT your online friend may figure out who you are. Instead, you could say, “I look forward to talking to you about this soon.”

Never use this anonymous account from work
The risk is too great that someone will see your screen, or that you will walk away from your desktop and someone will use the “back” button to scroll through the pages you visited, discover that you forgot to log out of Facebook, and see your anonymous profile.

Be careful about posting in online discussion groups
There are online discussion groups on topics related to your work. By all means, join such a forum and read the posts. But be careful about posting, including replying to others. When you post, you reveal your IP address. This will NOT reveal your name, your home address, your age, etc. But your IP address may reveal where you work IF you are accessing the group from your workplace’s Internet connection and that connection is configured a certain way.
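To make the IP-address point concrete: organizations often have blocks of IP addresses publicly registered to them in WHOIS databases, so an address alone can place a post inside a specific network. The sketch below uses Python’s standard `ipaddress` module; the “employer” block here is 203.0.113.0/24, a range reserved for documentation (RFC 5737), standing in for a hypothetical workplace’s registered block:

```python
import ipaddress

# Hypothetical employer's registered block; 203.0.113.0/24 is a
# reserved documentation range (RFC 5737), used here as a stand-in.
employer_net = ipaddress.ip_network("203.0.113.0/24")

def posted_from_employer(poster_ip: str) -> bool:
    """True if the poster's IP falls inside the employer's block,
    i.e. the post is traceable to that workplace's connection."""
    return ipaddress.ip_address(poster_ip) in employer_net

print(posted_from_employer("203.0.113.42"))  # True: inside the block
print(posted_from_employer("198.51.100.7"))  # False: some other network
```

Anyone who sees your IP address can do this same check against public registration records – which is exactly why posting from your workplace’s connection can undo your anonymity.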

Practice denying your online activities
People are going to ask you if you are on Facebook or Twitter. Practice saying no. Also practice your response to someone who says, “Is so-and-so on Facebook really you?”

If someone you do not know starts messaging your fake account, be careful about engaging with them. If they are asking “Who are you?” or “Why did you say that?”, ignore them. If they are asking how you know a shared friend, ignore them. If they become insulting, block them. If they say they are a reporter who saw your post somewhere and would like to interview you, ask what newspaper or TV station they work for and for their full name, then look up that organization online, call it, and ask whether that person works there. In other words, make absolutely sure it’s a REAL journalist who is asking you questions!

If anyone threatens you online, screen capture those messages and save them. If anyone threatens you online with physical harm in any way and you believe that person could figure out who you are, it may be best for you to block them and delete your account. Your safety is always paramount and you should do what you need to do to stay safe.

Why am I not recommending that a person report harassment to the company that operates the platform or social media site, or contact the local police? That is certainly an option if you live in a country that has rule of law. However, if you live in a developing country or a country with laws that censor Internet access, such reporting could actually put you in danger. Even so, hold on to your screen captures of threatening messages and share them with a person you trust if you feel they represent a real threat to you or your family.

Also see:

Orange Day: UNiTE to End Violence Against Women campaign

The United Nations Secretary-General’s UNiTE to End Violence against Women campaign, managed by UN Women, has proclaimed every 25th of the month as “Orange Day” – a day to take action to raise awareness and prevent violence against women and girls. Orange Day calls upon activists, governments and UN partners to mobilize people and highlight issues relevant to preventing and ending violence against women and girls, not only once a year, on 25 November (International Day for the Elimination of Violence against Women), but every month.

Orange Day 2017 action themes so far:

February: Violence Against Women and Girls and Women’s Economic Empowerment

March: Violence against Women and Girls with Disabilities

April: Violence against indigenous women and girls

May: Mobilizing resources to end violence against women and girls

June: Violence against women and girl refugees

July: Cyber violence against women

The official statement from UNiTE notes: “Although children have long been exposed to violence and exploitation, ICTs have changed the scale, form, impact and opportunity for the abuse of children everywhere. While both girls and boys are vulnerable to the different risks and harms related to the misuse of ICTs, girls have been disproportionately victimized in sexual abuse and exploitation through the production and distribution of child sexual abuse materials. In 2013, 81 per cent of child sexual abuse materials depicted girls. Girls are also particularly vulnerable to being groomed online for sexual encounters and sometimes exploited through live streaming of their sexual abuse. Many children are experiencing widespread victimization through online bullying, harassment, and intimidation, where girls are particularly targeted due to gender norms and power dynamics. Gender discrimination, lack of confidence, difficulty with language, poverty, and cultural factors can adversely affect girls and lead to their heightened vulnerability to these crimes and victimization.” SDG 5 of the Sustainable Development Goals (SDGs) focuses on Gender Equality, and places women’s access to technological empowerment among the core indicators of progress: “To achieve this goal, we must make sure that the internet will be a safe and more secure place that allows all women and girls to fulfill their potential as valued members of society and live a life free from violence.”

UNiTE has curated several resources related to such:

  • The Broadband Commission Working Group on Digital Gender Divide recently published a set of recommendations that specifically addresses threats aimed both at promoting better understanding and awareness of the ways in which women experience threats, and ensuring that stakeholders help to make the Internet and its use safer for women (page 32). Proposed actions include researching and understanding threats, increasing awareness of threats and how they can be addressed or reduced, developing safety applications and services and strengthening protection measures and reporting procedures.
  • The “Perils and Possibilities: Growing up Online” report, recently published by UNICEF, provides a glimpse into young people’s opinions and perspectives on the risks they face coming of age in a digital world.
  • UNICEF is collaborating with companies, governments and civil society to promote children’s rights related to the Internet and associated technologies. Take a look at UNICEF’s online depository of new business tools and guidance on child online protection which among others includes useful resources, learning materials, and tools for companies.
  • The Guidelines for Child Online Protection, prepared by ITU, outline best practices and key recommendations for different interest groups, including policy makers, industry, children, as well as parents, guardians, and educators. More resources on Child Online Protection from ITU’s database.
  • INHOPE is an active and collaborative global network of Hotlines dealing with illegal online content and committed to stamping out child sexual abuse from the Internet. The network offers a way for anyone to anonymously report Internet material, including child sexual abuse material, that they suspect to be illegal.
  • Launched in January, HeartMob is a project of Hollaback!, a non-profit organization powered by a global network of local activists who are dedicated to ending harassment in public spaces. The platform provides real-time support to individuals experiencing online harassment and empowers bystanders to act.

It’s also worth reading Women’s Rights Online, a 2015 report from the Web Foundation showing that the dramatic spread of mobile phones is not enough to get women online, or to achieve empowerment of women through technology. The study, based on a survey of thousands of poor urban men and women across nine developing countries, found that while nearly all women and men own a mobile phone, women are still nearly 50% less likely to access the Internet than men in the same communities, with Internet use reported by just 37% of women surveyed (vs. 59% of men). Once online, women are 30–50% less likely than men to use the Internet to increase their income or participate in public life. The report says young people are most likely to have suffered harassment online, with over six in 10 women and men aged 18–24 saying they had suffered online abuse. The Web Foundation was established by Web inventor Sir Tim Berners-Lee.

Also see:

How to change minds

I’m a part of the March for Science Facebook group, for people who were in the Marches for Science all across the USA in April 2017 or who supported them. A lot of the talk in the group has been about science education and public relations. There are individuals and communities all over the USA – and the world – fighting against science-based decision-making in public policies and science education in schools, and many in the group feel this is because of poor wording and poor outreach by scientists and supporters of science. In my ongoing quest to be a better communicator, I’ve watched these discussions closely.

Recently, someone posted the following about how we communicate about science. I think it’s a great testimony to what works, and what doesn’t, in swaying public opinion, changing people’s minds and fighting misinformation. I’m sharing it here, with her permission, but without her name to protect her identity:

I’m not a scientist. I’m not afraid of science but I also don’t have a strong grasp of most science related jargon. I joined this group along with a few other science groups/pages as I heard more and more of anti-science rhetoric from our govt. Although I don’t understand a lot of scientific things that doesn’t mean I don’t realize the importance of science for our society and for our future.

I have learned SO MUCH from reading posts and comments. The reason I have learned so much? The reason I am no longer “afraid” of GMO’s? The reason I have changed my mind on other popular misconceptions? Because my fear was never the science. My fear was that I didn’t know what information to trust. Money talks. It’s hard to figure out who is paying. Do I trust a science study that was paid for by a big corporation? Do I trust a study that’s published but not peer reviewed? WHO do you trust?

The common thread I’ve found as I read posts and comments in order to learn more is how stupid I am. How dumb was I to not trust GMO’s. People’s comments were blatantly MEAN. And sure, I was completely uneducated about GMO’s. I read the wrong information. I trusted the wrong sources. But again, without hours of research to find out funding sources, etc HOW do I know what to trust?

This question was amazing. I always want to learn more. I want to understand about so many things – to give my kids the best future possible. The best food to eat. The best meds for my asthmatic child. The best environment for them to grow up in, etc. But here’s the thing. If I wasn’t determined to do the best for my kids . . . by the 100th ridiculing comment on a post I found interesting I would have stopped following and learning. Heck by the 20th I would have written off these sciences pages.

Even in this thread there are those using terms like “stupid,” “brainwashing,” etc. Very derogatory terms and grouping all people who don’t have a knack for science into one realm. I have a great head for business, finances and can analyze the heck out of any non-technical literature. I don’t make fun or ridicule those people who don’t have that ability. It accomplishes nothing.

So thank you to those of you who answered this post thoughtfully. I’m certain there are many of you who diligently try over and over again to get your point across. Don’t give up. Changing peoples’ minds is never easy but in this case it’s worth the fight.

—end quoted text—

Also see:

Behavioural Insights at the United Nations – Achieving the 2030 Agenda

The United Nations has embraced the use of behavioral science to help it craft effective development activities and interventions. As it notes in this November 2016 blog post:

Across the globe, all people – poor or rich – sometimes make choices that are not conducive to their own well-being. Saving enough for retirement, eating healthy, investing in education – all too often we humans postpone intended actions to ‘tomorrow’, succumb to inertia or get stuck in habits.

In light of the extensive research on the cognitive biases that influence human decision-making, there is a broad consensus that traditional economic models are insufficient for effective policy-making. Behind every policy lie assumptions about how humans will behave in light of new regulations and why we act the way we do.

UNDP has embraced the idea of network nudges: people are influenced by the behavior of friends and members of their extended social network, and observe other people’s behavior as a guideline for what is acceptable and desirable. UNDP has been cooperating with the UK Behavioural Insights Team since 2013, and UNDP’s report, Behavioural Insights at the United Nations – Achieving the 2030 Agenda, advocates this approach for inclusion in every policy maker’s toolbox and presents 10 valuable case studies. This is from the page at the aforementioned link:

In 2016, the UNDP Innovation Facility collaborated with the newly engaged UN Behavioural Science Advisor to work on behaviorally-informed design with 8 UNDP Country Offices in all 5 regions: Bangladesh, Cameroon, China, Ecuador, Jordan, Moldova, Montenegro and Papua New Guinea. This Progress Report highlights the potential of behavioural insights to help achieve the Sustainable Development Goals and provides an overview of the 8 initiatives.

Behavioural insights draw from research findings from psychology, economics and neuroscience. These insights about how people make decisions matter for development. They matter for policy-formulation and addressing last mile problems.

UN Secretary General Ban Ki-moon noted that, “In order to succeed, Agenda 2030 must account for behavioural insights research… Our organization, our global agenda – and most importantly the people worldwide they are intended to serve – deserve nothing less than the best science available. A human-centered agenda requires a rigorous, research-based understanding of people.”

The report shows that approaching development challenges with behavioural insights leads to better diagnoses of problems and to better designed solutions. Public policy and programme officials around the world can achieve better outcomes — often at low or no cost — simply by leveraging our current understanding of human psychology and behaviour.

In January 2016, the UN Secretary-General appointed two “Behavioural Insights Advisors,” initially for six months. They worked with the UNDP Innovation Facility to improve uptake of an e-waste recycling solution in China, crowdfunding efforts for green energy in Ecuador, the anti-corruption initiative ‘Phones Against Corruption’ in Papua New Guinea, and more.

Wikipedia actually has some good pages that provide an overview of these and related subjects:

And here are some of my own resources on these and related subjects:

ICTs to reach & educate at-risk communities

Apps, social media, text messaging/SMS and other information and communication technologies (ICTs) are already playing a crucial role in educating people regarding public health issues, reaching marginalized communities and helping those that may be targets of harassment and discrimination. But in all of these tech4good initiatives, the importance of safety and security for those doing the outreach and those in the target audience is critical. People trying to promote a tech4good initiative do not want the technology to be used by hostile parties to identify, track and target people based on their health, lifestyle or beliefs.

For those interested in using ICTs to reach marginalized communities, or in how to communicate vital information about topics that are frowned upon in religiously conservative communities, the new publication Pioneering HIV services for and with men having sex with men in MENA: A case study about empowering and increasing access to quality HIV prevention, care and support to MSM in a hostile environment is well worth your time to read. The United States Agency for International Development (USAID) funded this project, and the 48-page publication was produced by the International HIV/AIDS Alliance and co-authored by Tania Kisserli, Nathalie Likhite and Manuel Couffignal. The publication includes two pages on how ICTs help to reach hidden communities threatened by police raids and rising homophobia in the MENA (Middle East and North Africa) region – for instance, how applications such as Grindr, which are frequently accessed by men having sex with men (MSM) in the region, provide virtual venues for disseminating information on HIV prevention, treatment and support services.


This is from the report (note the British spellings):

In 2015, the partners of the MENA programme implemented a pilot online peer outreach project to reach more MSM, in partnership with the South East Asian Foundation B-Change Technology.

In order to improve the understanding of the online habits and behaviours of MSM, two anonymous web surveys were launched online to collect information among MSM (living in Algeria, Lebanon, Morocco and Tunisia), recruited via Facebook and instant messaging channels. The first survey assessed technology use and included questions about mobile devices and tech-based sexual networking. The second survey collected further data on social media behaviours, with questions about using social networks, interpersonal communications, and negative experiences online. The results confirmed the penetration of internet and mobile technologies in urban centres, and highlighted the widespread use by MSM of mainstream social networks (predominantly Facebook) and global gay dating apps, especially in the evening. The predominant website for sexual networking was reported to be Planet Romeo; the predominant smartphone app for sexual networking was Grindr. The results also revealed that while MSM use smartphone instant messaging (SMS and Whatsapp mainly) to communicate and chat with friends, they tend to use the telephone when communicating with health providers. Sexual networking among this cohort demonstrated a preference for web-based methods versus offline (public space) networking. A significant proportion of negative experiences using social media or apps was also reported, in particular cases of breach of confidentiality online.

Based on these findings, the partners designed a pilot information and communications technology (ICT)-based intervention. Experienced peer educators created avatars representing different profiles of beneficiaries, collectively designed an online peer outreach intervention and developed the corresponding standard operating procedures and M&E framework. This was identified as the most feasible output based on existing resources and ICT experience. Building the capacity of community groups for this intervention would result in more effective use of popular social media platforms for MSM-peer outreach activities. Local trainings of ‘online peer educators’ were organised to strengthen digital security, content creation systems, online outreach procedures, conduct of peer educators online, and M&E framework to measure the outcomes towards the HIV continuum of care.

The trained ‘online peer educators’ created ‘virtual peer educators’ accounts/profiles and contacted MSM through internet and social media in their respective countries, mainly on Facebook, Whatsapp, Grindr, Hornet, Planet Romeo, Badoo, Tango and Babel, and mostly during evening and night shifts. The objective was to contact MSM not reached by the usual outreach in public spaces, and hence continue expanding the package of prevention services available to MSM. They provided interpersonal communications on HIV and STIs, disseminated IEC materials online, encouraged them to take an HIV test and referred them to prevention services provided by the partner organisations, as well as public health services in their country.

This test phase lasted from July to September 2015 in Agadir, Beirut, Tunis and Sousse. The results were promising; during the month of September 2015, the six online peer educators of ASCS in Agadir for instance reached 546 MSM via chat rooms, websites, apps and instant messaging. They referred 148 MSM for an HIV test and 86 MSM for an STI consultation. During this period ASCS noticed an increase in the number of MSM visiting the association to collect condoms and lubricant; ASCS peer educators appreciated this new type of outreach work compared to street outreach, the latter having become uneasy due to growing harassment by police. Some challenges that peer educators faced online were similar to ‘traditional’ or face-to-face outreach work: high interest in sexual health, initial reluctance to visit the association, take up services, or change risk behaviour.

“The virtual prevention pilot project has allowed us to reach a significant number of MSM, in particular those who remain hidden and aren’t reached through our outreach activities in the streets.” — peer educator and university student in Morocco

Some of the lessons learned from this pilot project:

  • Overall high acceptability: many MSM are eager to engage in an online conversation about HIV and STI prevention, rights and services; virtual spaces are perceived as safe to talk freely about sexual practices with no face-to-face bias; however, a significant proportion of MSM contacted online refused any discussion relating to sexual health and HIV.
  • Strong operational procedures and human resource capacity are required to maintain a high quality ICT tool that maintains privacy and confidentiality; consequently, organisational ICT capacity needs to be assessed and strengthened before initiating an online prevention project.
  • Monitoring and evaluation challenges: it is not easy to measure service use or user engagement online or to clearly show the link between use of ICT and uptake of services; monitoring of referral pathways between outreach CSOs and friendly providers needs to be aligned to track referral from virtual spaces to services.

One thing I do wonder: were any of the people involved volunteers?

Also see:

Folklore, Rumors & Misinformation Campaigns Interfering with Humanitarian Efforts & Government Initiatives

UPDATED:

Preventing Folklore, Rumors, Urban Myths & Organized Misinformation Campaigns From Interfering with Development & Aid/Relief Efforts & Government Initiatives

Folklore, rumors and contemporary myths/legends often interfere with development aid activities and government initiatives, including public health programs – even bringing such efforts to a grinding halt. They create ongoing misunderstandings and mistrust, prevent people from seeking help, encourage people to engage in unhealthy and even dangerous practices, and have even led to mobs attacking people because of something heard from a friend of a friend of a friend. With social media like Twitter and Facebook, as well as simple text messaging among cell phones, spreading misinformation is easier than ever.

Added to the mix are fake news sites set up specifically to mislead people, as well as crowdsourced efforts by professional online provocateurs and automated troll bots pumping out thousands of comments. Countering misinformation has to be a priority for aid and development organizations, as well as government agencies.

Since 2004, I have been gathering and sharing both examples of this phenomenon and recommendations on preventing folklore, rumors and urban myths from interfering with development and aid/relief efforts and government initiatives. I’ve recently updated this material with new guidance on countering organized misinformation campaigns.

Anyone working in development or relief efforts, or working in government organizations, needs to be aware of the power of rumor and myth-sharing, and be prepared to prevent and to counter such. This page is an effort to help those workers:

  • cultivate trust in the community through communications, thereby creating an environment less susceptible to rumor-baiting
  • quickly identify rumors and misinformation campaigns that have the potential to derail humanitarian aid and development efforts
  • quickly respond to rumors and misinformation campaigns that could derail or are interfering with humanitarian aid and development efforts

And, FYI: I do this entirely on my own, as a volunteer, with no funding from anyone. I update the information as my free time allows.

Also see:

fake news, folklore & friendships

It wasn’t getting a journalism degree, or being a journalist, that made me a skeptic when it comes to sensational stories. It was a folklore class. Urban Folklore 371, to be exact. It was a very popular class at Western Kentucky University back in the late 1980s, both for people getting a degree in folklore studies and for people needing humanities courses for whatever their degree program was, like me. Class studies focused on contemporary, largely non-religious-based legends, customs and beliefs in the USA. One class might focus on watching a film about the games kids play on a playground and how those games explore the things they fear – marriage, childbirth, stranger danger, being ostracized by their peers, etc. Another class might review the different versions of the “vanishing hitchhiker” story, why such stories are so popular in so many different cultures, and how the story changes over time.

In every class, I heard at least one student say, “That’s not a true story?! I always thought it was!” Because of that class, I realized there were legends being told as truth all around me, by friends, by family, even by newspapers. “I heard it from my cousin” or “My friend saw it in a newspaper” or “My Mom saw it on Oprah” was usually the preface to some outlandish story told as fact. But the class taught me that, in fact, no woman was ever killed by spiders nesting in her elaborate hairdo, that there has never been a killer with a hook for a hand who attacked a couple in a parked car in a nearby town, that no actor has ever had a gerbil removed from his anus, and on and on and on.

I became the “um – that’s not true” girl at various places where I worked. And then via email. And I still am, now on social media. And what I have learned from being little Ms. Debunker is that people REALLY do NOT like these stories debunked. In fact, pointing out the facts that prove these stories aren’t true, no matter how gently I try to do it, often makes people very angry.

Back in the 1990s, a friend sent me yet another forwarded email. This time, the text said the email was from Microsoft founder Bill Gates, that he’d written a program that would trace everyone to whom the email message was sent, and that he was beta testing the program. The email encouraged people to forward the message and said that if it reached 1,000 people, everyone on the list would receive $1,000. Of course, it wasn’t true – I knew it as soon as I saw it. She’d sent me several emails of this type – one that said people who forwarded the message would get a free trip to Disney World, another that said we’d all get free computers, and on and on. I had been deleting them, but I was tired of it. So I looked online, found a site that debunked the myth, and sent her the link. I didn’t make any judgement statements; I just said, “This is a myth. Here’s more info. You might want to let everyone you sent it to know, as well as the person you got it from,” or something similar.

She was not happy with me. In fact, it almost ended our friendship. She told me that the Internet was “a place for having fun” and “you can’t win if you don’t play” and what did she have to lose by forwarding the message even if it sounded fishy?

And that kind of reaction kept happening. Three new friends I made back in 2010, after I’d moved back to the USA, all unfriended me on Facebook the same day, outraged that I had pointed out that several things they were posting as their status updates – about how Facebook was going to start charging users, about how putting up a disclaimer on your Facebook page would stop the company from being able to sell your information, and on and on – were all urban legends, all untrue. Their reaction was almost verbatim what that email friend had said: Facebook is “a place for having fun” and “it’s better to be safe and share it” and what did they have to lose by sharing the message even if it sounded fishy? Also, they said they did not have time to “check every single thing online.”

Now, in 2016, I have friends that are furious with me for posting science-based web sites that debunk their posts from quack sites like the “Food Babe” claiming that GMOs cause cancer or that vaccines cause autism (to be clear, these are MYTHS). Two journalists – JOURNALISTS – were mad at me when I pointed out that a status update one had shared – it urged users to use the Facebook check-in function to say they were at Standing Rock in North Dakota, claiming this would somehow prevent the Morton County Sheriff’s Department there from geotargeting DAPL protesters – was promoting false information. I wasn’t just annoyed by the message – I found it imprudent, and yet another example of slackervism or slacktivism: people truly wishing to assist the protesters were checking in on Facebook rather than doing something that would REALLY make a difference, like sending funds to support the protest efforts or writing their Congressional representatives in support of the protesters. It also misdirects people from the nefarious ways law enforcement really does surveil people on social media. I would have thought journalists would know better than to engage in such behavior.

Contemporary legends online cause harm, and this bothered me long before the Standing Rock/Facebook check-in myth. Since 2004, I have been gathering and sharing examples of how rumors and urban / contemporary myths often interfere with relief and development activities, and government initiatives, including public health initiatives — even bringing such to a grinding halt. These myths create ongoing misunderstandings among communities and cultures, prevent people from seeking help, encourage people to engage in unhealthy and even dangerous practices, cultivate mistrust of people and institutions, and have even led to mobs of people attacking someone for no reason other than something they heard from a friend of a friend of a friend. With the advent of social media like Twitter and Facebook, as well as simple text messaging on cell phones, spreading misinformation is easier than ever.

Based on my experience as a researcher and a communications practitioner, and everything I’ve read – and I read a LOT on this subject – rumors that interfere with development and aid/relief efforts and government health initiatives come from:

  • misinterpretations of what a person or community is seeing, hearing or experiencing,
  • previous community experiences or cultural beliefs,
  • willful misrepresentation by people who, for whatever reason, want to derail a development or relief activity,
  • unintentional but inappropriate or hard-to-understand words or actions by a communicator, or
  • the desire of an individual or community to believe an alternative narrative, a desire that is stronger than the facts.

That list of bullet points was central to the long list I made of recommendations on preventing folklore, rumors and urban myths from interfering with such initiatives. I made that list to help aid workers, particularly people leading public health initiatives. For years, I’ve updated that list and felt really good about it being comprehensive and realistic, and I’ve employed some of the methods myself in my work.

But are these recommendations enough anymore? I’m not sure, because BuzzFeed reported that fake news stories about the USA Presidential election this year generated more engagement on Facebook than the top election stories from 19 major news outlets COMBINED – and that included major news outlets such as The New York Times, The Washington Post, CNN, and NBC News. And a new study from Stanford researchers evaluated students’ ability to assess information sources, and described the results as “dismaying,” “bleak” and a “threat to democracy,” as reported by NPR News. Researchers said students displayed a “stunning and dismaying consistency” in their responses, getting duped again and again. The researchers weren’t looking for high-level analysis of data but just a “reasonable bar” of, for instance, telling fake accounts from real ones, activist groups from neutral sources and ads from articles. And the students failed. Miserably. And then there’s my own experience seeing the reaction a lot of people have to references to sites like snopes.com or truthorfiction.com or hoax-slayer.com or the Pulitzer Prize-winning site PolitiFact that debunk myths; those people claim that “These sites aren’t true. They’re biased.” And that’s that – just a simple dismissal, so they can continue to cling to falsehoods.

National Public Radio did a story a few days ago about a man in Los Angeles who decided to build fake news sites that publish outrageous, blatantly false stories that extreme far-right groups in the USA (also known as the “alt-right”) would love to believe; he thought that when these stories were picked up by white supremacist web sites and promoted as true, he and others, particularly major media outlets, would be able to point out that the stories were entirely fiction, created only as bait, and that the white supremacists were promoting them as fact. But instead, thousands of people with no formal association with white supremacist groups shared these stories as fact – reaching millions more people. He wrote one fake story for one of his fake sites on how customers in Colorado marijuana shops were using food stamps to buy pot. Again, this story is NOT TRUE. But it led to a state representative in Colorado proposing actual legislation to prevent people from using their food stamps to buy marijuana; a state legislator was creating legislation and outrage based on something that had never happened.

BTW, to see these fake news sites for yourself, just go to Google and search for “snopes is biased,” and you will get a long list of links to fake news sites, most right-wing, all attacking fact-based debunking sites like Snopes. I refuse to name those fake news sites because I don’t want them to get any more traffic than they already do.

Competent decision-making depends on people – the decision-makers – having reliable, accurate facts put in a meaningful and appropriate context. Reason – the power of the mind to think, understand and form judgments by a process of logic – relies on being able to evaluate information regarding credibility and truth. But fact-based decision-making, the idea of being logical and using reason and intellect, has become something to eschew. The modus operandi for many is to go with your gut, not with the facts. Go not for truth, but truthiness.

I always thought that last bullet in my list of why people believe myths, “the desire of an individual or community to believe an alternative narrative, a desire that is stronger than the facts,” was easy to address. Now, given all the aforementioned, I’m not at all sure.

I’m going to keep calling out myths whenever I see them, and if it costs me Facebook friends, so be it. I prefer the truth, even when the truth hurts, even when the truth causes me to have to reconsider an opinion. There is a growing lack of media literacy and science literacy in the USA – and, indeed, the world. And the consequences of this could be catastrophic – if they haven’t been already. People need to be able to not just access information, but also to analyze it and evaluate the source. That’s just not happening. And I’ve no idea how to change things.

Also see:

8:10 am Nov. 28, 2016 Update: Filippo Menczer, Professor of Computer Science and Informatics and Director of the Center for Complex Networks and Systems Research at Indiana University, Bloomington, authored the article Why Fake News Is So Incredibly Effective, published in Time and The Conversation. Excerpts: “Our lab got a personal lesson in this when our own research project became the subject of a vicious misinformation campaign in the run-up to the 2014 U.S. midterm elections. When we investigated what was happening, we found fake news stories about our research being predominantly shared by Twitter users within one partisan echo chamber, a large and homogeneous community of politically active users. These people were quick to retweet and impervious to debunking information.” Also of note: “We developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements… our lab is building a platform called Hoaxy to track and visualize the spread of unverified claims and corresponding fact-checking on social media. That will give us real-world data, with which we can inform our simulated social networks. Then we can test possible approaches to fighting fake news.”
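Tools like BotOrNot (later renamed Botometer) use trained classifiers over many account features; as a purely illustrative sketch of the underlying idea, here is a hypothetical two-signal heuristic that scores an account by its tweet rate and how often it repeats the same text. The thresholds and weights are my own invented assumptions, not the tool’s actual model.

```python
# Hypothetical bot-scoring heuristic: combines tweet rate and text
# repetition into a 0..1 score. Thresholds and weights are invented
# for illustration; real detectors use trained classifiers with far
# more features.
from collections import Counter

def bot_score(tweets, account_age_days):
    """Score 0..1 from tweets-per-day and duplicate-text ratio."""
    if not tweets or account_age_days <= 0:
        return 0.0
    rate = len(tweets) / account_age_days            # tweets per day
    rate_signal = min(rate / 150.0, 1.0)             # 150+/day saturates
    unique = len(Counter(t.strip().lower() for t in tweets))
    repeat_signal = 1.0 - unique / len(tweets)       # 0.0 = all unique
    return 0.5 * rate_signal + 0.5 * repeat_signal

# A human-paced account with varied text scores low; a high-volume
# account repeating one message scores high.
human = [f"thought number {i}" for i in range(30)]   # ~1 tweet/day
bot = ["click this link"] * 450                      # 150 tweets/day
print(bot_score(human, 30) < 0.2 < bot_score(bot, 3))   # → True
```

The point of the sketch is only that amplification campaigns leave measurable statistical traces; the hard research problem is doing this reliably at scale, which is what projects like Hoaxy aim at.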

1:05 pm Nov. 29, 2016 Updates:

Donald Trump and the Rise of Alt-Reality Media: You think the truth took a hit last year? It’s about to get worse. A lot worse. from Politico.

For Some, Scientists Aren’t The Authority On Science from NPR

Dec. 3, 2016 Updates:

Spread of Fake News Provokes Anxiety in Italy from The New York Times

Dec. 6, 2016 Updates:

A North Carolina man read online that a pizza restaurant in northwest Washington, DC, was harboring young children as sex slaves as part of a child-abuse ring, so he drove six hours from his home to the restaurant, and not long after arriving, he fired an AR-15-style assault rifle. No one was injured, and he has been arrested, but, as The New York Times notes, “the shooting underscores the stubborn lasting power of fake news and how hard it is to stamp out. Debunking false news articles can sometimes stoke the outrage of the believers, leading fake news purveyors to feed that appetite with more misinformation. Efforts by social media companies to control the spread of these stories are limited, and shutting one online discussion thread down simply pushes the fake news creators to move to another space online. The articles were exposed as false by publications including The New York Times, The Washington Post and the fact-checking website Snopes. But the debunking did not squash the conspiracy theories about the pizzeria — instead, it led to the opposite. ‘The reason why it’s so hard to stop fake news is that the facts don’t change people’s minds,’ said Leslie Harris, a former president of the Center for Democracy & Technology, a nonprofit that promotes free speech and open internet policies.”

Dec. 9, 2016 update

“Fakes, News and the Election: A New Taxonomy for the Study of Misleading Information within the Hybrid Media System”

Giglietto, Fabio and Iannelli, Laura and Rossi, Luca and Valeriani, Augusto

November 30, 2016. Convegno AssoComPol 2016 (Urbino, 15-17 Dicembre 2016), Forthcoming. Available at SSRN: https://ssrn.com/abstract=2878774

Abstract:
The widely unexpected outcome of the 2016 US Presidential election prompted a broad debate on the role played by “fake-news” circulating on social media during political campaigns. Despite a relatively vast amount of existing literature on the topic, a general lack of conceptual coherence and a rapidly changing news eco-system hinder the development of effective strategies to tackle the issue. Leveraging on four strands of research in the existing scholarship, the paper introduces a radically new model aimed at describing the process through which misleading information spreads within the hybrid media system in the post-truth era. The application of the model results in four different typologies of propagations. These typologies are used to describe real cases of misleading information from the 2016 US Presidential election. The paper discusses the contribution and implication of the model in tackling the issue of misleading information on a theoretical, empirical, and practical level.

Also see: Feuds in the nonprofit/NGO/charity world

Research Explaining How Websites Encourage Volunteering & Philanthropy

Most practitioners in volunteer management and community engagement don’t have time to review academic literature to see if there might be information that’s helpful in their work – and even if they do have time, academic language can be inaccessible for non-academics. I try to read as much as I can and then summarize and pass on the information that can help practitioners in their work, or even just give them ammunition for a project or funding proposal.

Below are links to two academic papers that are worth at least a skim by anyone trying to use web sites to encourage philanthropy, including volunteering. The reference lists at the end of each paper are gold mines of research for further reading:

Persuasion in Prosocial Domains: Explaining the Persuasive Affordances of Volunteering
by Peter Slattery, Patrick Finnegan and Lesley Land, all three of the Australian School of Business, UNSW Australia, and Richard Vidgen of Hull University Business School, University of Hull, UK. Presented at the Twenty Second European Conference on Information Systems, Tel Aviv, 2014.

Abstract: As technology becomes increasingly pervasive and invasive, it increasingly facilitates and instigates behaviour. Prosocial behaviours, such as volunteering, activism and philanthropy, are activities that are considered to be particularly beneficial to others. Prosocial behaviours are important within IS as: (i) they are encouraged by IS stakeholders including volunteering organisations and charities, and; (ii) they contribute to tackling social issues. However, while information technology is poised to become increasingly important for facilitating prosocial behaviour, little is known about how digital artefacts can encourage it. To address this research gap, this study seeks to explain how website features persuade in prosocial online contexts. The study uses the Repertory Grid Technique (RGT) to examine individuals’ experiences of persuasion on live volunteering websites. The analysis reveals that ease of use, trust, and creating positive emotion are important factors in persuading users to volunteer.

Examining How Perceptions of Websites Encourage Prosocial Behaviour
by Peter Slattery, Patrick Finnegan and Richard Vidgen of Australian School of Business, UNSW Australia. Presented at the Thirty Seventh International Conference on Information Systems, Dublin 2016.

Abstract: Organisations are increasingly reliant on information and communications technology (ICT) to encourage prosocial behaviour (i.e., volunteering, philanthropy and activism). However, little is known about how to use ICT to encourage prosocial behaviour. Given this research gap, the objective of this study is to outline and test a research model that assesses the role of specific perceptions of websites in encouraging prosocial behaviour. To do this, we review the literature to derive a theoretical model of relevant perceptions. We then test the extent to which this model can predict participants’ volunteering and philanthropic behaviour subsequent to their usage of a website that encourages prosocial behaviour. The findings are expected to contribute by (i) giving insights into how perceptions of websites encourage prosocial behaviour, (ii) explaining the roles of negative and positive affect in ICT domains, and (iii) developing a “persuasiveness of website scale” to help IS researchers to measure this construct.

In addition, Mr. Slattery’s 2016 PhD thesis is Explaining How Websites Are Used to Encourage Volunteering and Philanthropy. The thesis is restricted from public access until March 2018, but some of its research is repeated in the aforementioned papers.

Also see this list of research and evaluations of virtual volunteering, as a practice in general or focused on specific projects, on the Virtual Volunteering wiki.

UN, NGO efforts to counter hate

On December 2, 2015, the United Nations Alliance of Civilizations (UNAOC) held a Symposium on Hate Speech in the Media, with senior officials calling for a global mobilization of citizens to help counter messages that promote xenophobia, violent extremism and prejudice. The symposium was the first of a series that UNAOC will host, called Tracking Hatred. The next symposium will be held in Baku, Azerbaijan, in April.

The UN Counter Terrorism Executive Directorate (CTED) also organized two days of panel discussions later in December, a collaboration between the public and private sector, called “Preventing Terrorists from Exploiting the Internet and Social Media to Recruit Terrorists and Incite Terrorist Acts, While Respecting Human Rights and Fundamental Freedoms.”

@unaoc, @friendsunaoc, @UN_CTED and other agencies, UN and non-UN alike, are using #SpreadNoHate and #Reclaimingtheweb on Twitter to promote messages from these efforts. I’ll be using them as well, as appropriate, often.

Cristina Gallach, UN Under-Secretary-General for Communications & Public Information, said during the UNAOC event, “Hate speech has been with us for a long time. We will never forget the slaughter of over 800,000 Tutsis and moderate Hutus during a brief three-month period in Rwanda in 1994. We will never forget either the six million Jews plus five million others who perished because of one hateful vision… Today, however, more than ever, individuals are using hate speech to foment clashes between civilizations in the name of religion. Their goal is to radicalize young boys and girls, to get them to see the world in black and white, good versus evil, and get them to embrace a path of violence as the only way forward.” She wasn’t just referring to Daesh (also known as ISIL or ISIS), though they are the most high-profile right now and, therefore, they were the primary focus of this event.

From what I’ve read about the symposium, there were lots of comments by speakers about enforcing laws that prohibit incitement of hatred or violence, and about social media companies being compelled to quickly delete content. I’m wary of this kind of talk, as governments use cries of “hate speech” to arrest people that are critical of the government or a religion, such as this 14-year-old boy in Turkey, or these teens in Egypt, or Raif Badawi in Saudi Arabia. I much prefer strategies focused on communications activities that establish and promote a narrative that pushes back against hate and prejudice, and was glad to see that strategy as a focus of two of the CTED panels, one called “privacy and freedom of expression in the digital age” and another that I am very interested in, called “Use of Internet and communications technology for counter-messaging purposes” – the link goes to a webcast of the panel, moderated by Steven Siqueira, Acting Deputy Director, CTITF Office- UN Counter-Terrorism Centre (UNCCT) – I so wish there were a transcript of this panel! If you want to listen to just a bit, here’s my absolute favorite: go to around the 14:00 point and listen to Humera Khan, Executive Director of Muflehun – she gives realistic, practical advice on mobilizing youth to counter online messages of hate. And then listen to Jonathan Birdwell of the Institute for Strategic Dialogue, right afterward, talking about teaching young people to critically engage with what they read online, and the importance of digital literacy. And then jump to around 36:00 and listen to Abdul-Rehman Malik, who has a provocative, assertive, right-on challenge to governments on this subject. The questions and answers after these three presenters are worth your time as well. The entire session lasts about 90 minutes, and is really worth your time to listen to (please, UN, release it as a podcast!).

I hope the people involved in these UN and civil society efforts know that, in the last 24 hours, Muslims on Twitter have hilariously trolled a Daesh leader’s call to violence – humor is a powerful tool in fighting against prejudice, and these tech-savvy Muslims are doing it brilliantly. I hope they know about online groups like Quranalyzeit and Sisters in Islam, tiny organizations doing a brilliant job online of countering extremist messages regarding Islam, and doing it as Muslims and from an Islamic perspective. Or about Mohamed Ahmed, a middle-aged father and gas station manager, and one of many Muslims in Minneapolis, Minnesota frustrated by Daesh’s stealthy social media campaigns, and countering it with a social media campaign of his own, AverageMohamed.com.

AND I HOPE EVERYONE KEEPS TALKING. Because I think they are talking about activities and messages that will really work in stopping the violence, and will make all aid and development efforts – about water, about reproductive health, about agriculture, WHATEVER – actually work, actually be sustainable. I so wish all of these efforts were getting more attention online, in traditional media, among all United Nations agencies, among NGOs, and among politicians.

Also see:

Propaganda for good (blog)

Recommendations for UN & UNDP in Ukraine to use Twitter, Facebook, Blogs and Other Social Media to Promote Reconciliation, Social Inclusion, & Peace-Building in Ukraine (PDF)

Reconciliation (a blog of frustration I wrote while working in Ukraine in 2014)

Propaganda for good

I am fascinated with propaganda – information meant, specifically, to encourage a particular way of thinking – and with social engineering, the social science regarding efforts to influence attitudes and social behaviors on a large scale – call it propaganda for good.

Propaganda is communications not just to create awareness, but to persuade, to change minds, and to create advocates. It’s communications for persuasion. These are communications activities undertaken by governments, media, corporations, nonprofits, public health advocates, politicians, religious leaders/associations, terrorist groups, and on and on, and they aren’t automatically bad activities: such messaging has inspired people to wear seat belts even before there were laws requiring such, to not drink and then drive, to practice safer sex that prevents HIV transmission, to read to their children, to spay and neuter their pets, and has lessened intolerance among different groups, and on and on.

I use these techniques myself, to a degree, in trying to get nonprofits and government agencies to embrace virtual volunteering and in recruiting for diversity and in creating welcoming environments for everyone at nonprofit organizations and within government initiatives. I’m not just trying to create awareness about those concepts and practices; I’m trying to create buy-in for them, to break down resistance to them, to get initiatives to embrace them. I’m evangelizing for those concepts.

My fascination with propaganda is why I track how folklore, rumors and urban myths interfere with development and aid/relief efforts, and government initiatives, and how to prevent and address such. That subject was almost my Master’s Degree thesis; though I ultimately chose a different topic, I decided the data I’d collected was too helpful not to publish, and I’ve continued to research this topic and update this resource. And I have attempted to apply my elementary understanding of social engineering in my work, most recently when I drafted Recommendations for UN & UNDP in Ukraine to use Twitter, Facebook, Blogs and Other Social Media to Promote Reconciliation, Social Inclusion, & Peace-Building in Ukraine (PDF); it offers considerations and recommendations for social media messaging that promotes reconciliation, social inclusion, and peace-building in Ukraine, and provides ideas for messaging related to promoting tolerance, respect and reconciliation in the country, and messaging to counter bigotry, prejudice, inequality, and misperceptions and misconceptions about particular groups of people among Ukrainians as a whole.

My fascination with communications for persuasion, not just awareness, is also why I’m fascinated with the rhetoric in the USA about how Daesh – what most Americans, unfortunately, call ISIS, ISIL or the Islamic State – uses social media to persuade. There are few details in the mainstream media and in politicians’ rhetoric on how this is really done – just comments like “He was radicalized by ISIS on Twitter,” which makes it sound like the app is somehow causing people to become terrorists. That’s why I was so happy to find this blog by J.M. Berger, a nonresident fellow in the Project on U.S. Relations with the Islamic World at Brookings and the author of “Jihad Joe: Americans Who Go to War in the Name of Islam”. The blog, “How terrorists recruit online (and how to stop it),” provides concrete information on how Daesh uses social media to recruit members – and it sounds a lot like the same techniques various cults used to recruit members before social media. The blog also provides concrete ways to counter the message, and explains how reporters can avoid robotically amplifying the Daesh message.

Here’s an article about the manual that Al Qaeda and now ISIS use to brainwash people online; it provides an outstanding summary of what the manual says – one that echoes the aforementioned analysis.

December 28, 2015 addition: in an analysis paper released in early 2015, J.M. Berger and Jonathon Morgan, as part of the The Brookings Project on U.S. Relations with the Islamic World, answer fundamental questions about how many Twitter users support ISIS, who and where they are, and how they participate in its highly organized online activities. It notes that, in its 2014 tracking of Twitter accounts that support ISIS, 1,575 of them tweeted more than 50 times per day on average, with 545 tweeting more than 150 times per day. “These prolific users—referred to in ISIS social media strategy documents as the mujtahidun (industrious ones)—form the highly engaged core of ISIS’s social media machine. These users may not tweet every day, but when they do, they tweet a lot of content in a very short amount of time. This activity, more than any other, drives the success of ISIS’s efforts to promulgate its message on social media. Short, prolonged bursts of activity cause hashtags to trend, resulting in third-party aggregation and insertion of tweeted content into search results. Prior to the start of Twitter’s aggressive account suspensions, highly organized activity among the mujtahidun—who at one point may have numbered as many as 3,000, including bots—allowed ISIS to dominate certain hashtags and project its material outside of its own social network to harass and intimidate outsiders, as well as to attract potential recruits.”
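The kind of tally behind those Berger/Morgan figures is straightforward once you have per-account tweet counts over an observation window: compute each account’s average tweets per day and count how many accounts exceed each threshold. The sample data below is invented for illustration; only the 50- and 150-per-day thresholds come from the paper.

```python
# Sketch of the tally behind "N accounts tweeted more than X times
# per day on average": count accounts whose daily average exceeds
# each threshold. The sample averages are fictional.

def high_volume_counts(daily_averages, thresholds=(50, 150)):
    """Return {threshold: number of accounts averaging above it}."""
    return {t: sum(1 for avg in daily_averages if avg > t)
            for t in thresholds}

# tweets_in_window / days_observed for a handful of fictional accounts
averages = [2.0, 12.5, 55.0, 80.0, 160.0, 300.0]
print(high_volume_counts(averages))   # → {50: 4, 150: 2}
```

Simple as it is, this is the measurement that lets researchers separate a small hyperactive core from the much larger population of ordinary accounts.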

And here’s another article I was pleased to find, Fighting ISIS online, talking about the tiny and not-so-effective effort to counter Daesh online, and which notes:

Humera Khan, executive director of Muflehun (Arabic for “those who will be successful”), a Washington, D.C., think tank devoted to fighting Islamic extremism, says people like her and (Paul) Dietrich who try such online interventions face daunting math. “The ones who are doing these engagements number only in the tens. That is not sufficient. Just looking at ISIS-supporting social-media accounts—those numbers are several orders of magnitude larger,” says Khan. “In terms of recruiting, ISIS is one of the loudest voices. Their message is sexy, and there is very little effective response out there. Most of the government response isn’t interactive. It’s a one-way broadcast, not a dialogue.”…

Social-media research has shown that messages from friends and peers are more persuasive than general advertising. Other bodies of research show that youth at risk of falling into many kinds of trouble, from drugs to gangs, often benefit from even small interventions by parents, mentors, or peers. But so far, major anti-ISIS programs don’t involve those kinds of outreach.

That emphasis is mine. I find these articles fascinating – and woefully ignored by governments and moderate Muslims in the fight online, and via traditional media, against Daesh.

This article from The Atlantic explores the strategy further: “ISIS is not succeeding because of the strength of its ideas. Instead, it exploits an increasingly networked world to sell its violent and apocalyptic ideology to a microscopic minority—people who are able to discover each other from a distance and organize collective action in ways that were virtually impossible before the rise of the Internet.”

I would love to see moderate, peace-focused Islamic social groups with a good understanding of online communications, like Muflehun, Quranalyzeit and Sisters in Islam, receive grants to hire more staff, train other organizations, and create a MUCH larger, more robust movement on social media with their loving, pro-women, Islamic-based messages. Such tiny organizations are doing a brilliant job of countering extremist messages regarding Islam, and doing it as Muslims and from an Islamic perspective. But they are drowned out by Daesh. Governments also need to not do this.

December 11, 2015 addition: Mohamed Ahmed, a middle-aged father and gas station manager, is one of many Muslim Minneapolitans doing whatever he can to fight extremism in his state. Frustrated by the Islamic State’s stealthy social media campaigns, Mr. Ahmed decided to make a social media campaign of his own. Ahmed has used his own money to produce and develop his website, AverageMohamed.com. On his site, Ahmed creates cartoons and videos so average people can share “logical talking points countering falsehood propagated by extremists.” More about how Minnesota Muslims work to counter extremist propaganda.

The reality is that the Hulk, Smash! strategy will not work to fight terrorist ideology and the violent results of such. Nazism survived the bombing and defeat of Nazi Germany. Bombing cities is not what marginalized the Ku Klux Klan, and bombing cities does not stop people like (or people who have supported the ideas of) Timothy McVeigh or Eric Rudolph or Jim Jones. We know what works. Let’s fund it and do it.

Index of my own communications advice