Tag Archives: behavior

Cultivating Online Civility

When I began writing about online culture, back in the late 1990s, misinformation was at a minimum and easy to identify, and hateful trolls were oh-so-quickly banned from the online communities they tried to disrupt.

Now, hate and misinformation rage online, and not just among strangers – neighbors are raging against each other on local online communities.

Back in the 1990s, in promoting virtual volunteering – using the Internet to support and involve volunteers – people who were new to the Internet (yes, there used to be such people) would ask lots of questions about what it is like to work with people remotely, rather than onsite, in-person. I created a section of the Virtual Volunteering Project web site, and then my own web site, specifically to talk about online culture, about the different ways people expressed themselves online and how to appreciate those differences, and how to quickly ramp up your skills for working with others online. I linked to some netiquette guidelines, but didn’t put much emphasis at all on online civility, dealing with trolls or addressing misinformation.

My, how times have changed…

A recent Wall Street Journal investigation revealed that Facebook was aware of the polarizing tendencies of its groups feature as early as 2016, and that feature continues to serve as a vector for lies, especially regarding COVID-19, as this Wired article, Facebook Groups Are Destroying America, notes:

Facebook users have been seeing more content from “friends and family” and less from brands and media outlets… Dynamics in groups often mirror those of peer-to-peer messaging apps: People share, spread, and receive information directly to and from their closest contacts, whom they typically see as reliable sources. To make things easier for those looking to stoke political division, groups provide a menu of potential targets organized by issue and even location; bad actors can create fake profiles or personas tailored to the interests of the audiences they intend to infiltrate. This allows them to seed their own content in a group and also to repurpose its content for use on other platforms... Related memes and links to fringe right-wing websites have been shared millions of times on Facebook in the past few months. Users coordinating their activities across networks of groups and pages managed by a small handful of people boost these narratives. At least nine coordinated pages and two groups—with more than 3 million likes and 71,000 members, respectively—are set up to drive traffic to five “news” websites that promote right-wing clickbait and conspiracy theories. In May, those five websites published more than 50 posts promoting Obamagate, which were then shared in the linked pro-Trump groups and pages. The revolving door of disinformation continues to spin.

And that doesn’t even begin to address the problems with dedicated trolls – people who target others online with insults and harassment in an effort to drive the person offline.

I now have a curated list of resources on online civility, and I continue to update my long list of recommendations on how to address online misinformation, which I’ve been maintaining for more than two decades. I also now have a web page of resources regarding online harassment, defamation & libel, and I regularly share on the TechSoup Online Community about how women worldwide are the frequent targets of harassing trolls who dedicate their time to silencing those voices. I never dreamed back in the 1990s that things would be so overwhelmingly negative now, or that these would become the critical issues they are. But here we are.

Can online civility be restored? Is it possible to challenge misinformation and destructive speech in the strongest, most deliberate of terms without being accused of hate speech yourself? Can there be rules for online civility that don’t stifle much-needed debate? I hope these curated resources can help answer those questions – but, honestly, based on what I’ve experienced myself this year, I’m deeply skeptical. Perhaps I need to create a list of resources on “Learning to live and thrive in a world with hateful, hate-filled people.”

Also see:

Also, The Last Virtual Volunteering Guidebook: Fully Integrating Online Service Into Volunteer Involvement can help you better work with people online – specifically volunteers. These can be volunteers in short-term, “microvolunteering” tasks or in longer-term, higher-responsibility roles. These can be volunteers who do some or most of their service onsite at your organization, or volunteers who do most or all of their service remotely, rarely or never onsite and in-person with you. This is the most comprehensive resource anywhere on working with online volunteers, and on using the Internet to support ALL volunteers, including those you might not think of as “online” volunteers.

If you have benefited from this blog or other parts of my web site and would like to support the time that went into researching information, developing material, preparing articles, updating pages, etc. (I receive no funding for this work), here is how you can help.

How to be active & anonymous online – a guide for women in religiously-conservative countries

In the world in which we all live, most people have to be online, regularly:

  • There is essential government and business information that can be accessed only online, or can be accessed most cheaply and easily online.
  • There is breaking news that can affect a person’s life or livelihood and, therefore, needs to be learned as close to real-time as possible – and that could happen only online.
  • There is information related to our work that is most quickly, easily accessed online.

And “online” includes using social media, such as Facebook and Twitter.

However, in many religiously-conservative communities around the world, women take a huge risk by being online, specifically in using social media. I explore this in a blog I wrote called virtue & reputation in the developing world. Because of threats to their reputation and safety, many women in religiously-conservative countries such as Afghanistan and Pakistan have given up on having a virtual identity at all – I personally know two such women, both professionals. This greatly hinders their ability to connect with potential colleagues abroad who could help them in their work, to build up a professional reputation beyond the walls of their office or beyond the staff of their organization, and to access information essential for their work and life.

There are some ways for women to develop an online profile on social media, including Facebook, that allow them to access essential information, to post information and to network with professionals in their field of expertise, while still protecting their identities online. Here are some guidelines:

Choose a first and last name you will use online only
These should be names that are different from your real names. Also try not to pick a name that belongs to a real person. You can use just an initial – one letter – for your first name.

Create an email address for your anonymous profile
Gmail is a good choice. Use an address that in no way involves your real name. Associate this address with your social media accounts, rather than your work or university email address.

Be vague online about your employer or university
On any social media site, such as Facebook, do not state the full, real name of your employer or of the university you currently attend. Identify yourself more vaguely, such as:

  • employee of an Afghan government ministry
  • assistant at an Egyptian dental office
  • nurse at a hospital in Kuwait
  • student at a university in Kabul

Be careful who you friend on Facebook
If you want to friend people who know your real name, talk to them face-to-face first, if you can, and explain why it is so important that they keep your identity a secret if you connect on social media. If you have an argument with that person, will he or she reveal your true identity online? You must friend only people you can trust, and those people need to understand that they must NOT tell others who you are online or make comments that would reveal who you are. When in doubt, don’t friend local people at all; focus instead on international colleagues who fully understand your situation or who do not know you offline at all.

Do not share photos of yourself where your face can be seen
You can share photos of yourself on social media where your identity cannot be determined: for instance, a photo in which you are standing with your back to the camera and not wearing distinctive clothing, or a photo of just your hands.

Do not share photos of family or friends
This could make it easier for people to figure out who you are.

Have a physical address that isn’t your home or workplace
Sometimes, to register on a particular web site, you must provide a physical address for your home or workplace. Pick a public place as the address you will use: a public library or a bookstore are good choices. Those places may end up getting paper mail addressed to your fake identity, and that’s okay: there is no way for this to be traced back to you, and it won’t be mail you want. Never use your actual home, workplace or university address for your anonymous profile.

Post status updates that do not indicate your identity
You can share memes and news stories (always verify them first and ensure they are true), write status updates about the weather, write your opinion of current affairs, or offer advice related to your country or your profession. But don’t write specifics, such as “I just attended a great class on the state of water and sanitation in Luxor”: that level of detail could be used by someone who reads it to figure out who you are.

Be careful when commenting on the Facebook status updates of friends
If one of your colleagues posts a status update, and you comment that “I look forward to talking to you about this at the staff meeting on Monday at 4”, one of their other friends who is NOT your online friend may figure out who you are. Instead, you could say, “I look forward to talking to you about this soon.”

Never use this anonymous account from work
The risk is too great that someone will see your screen, or that you will walk away from your desktop and someone will use the “back” button to scroll through the pages you have visited, discover that you forgot to log out of Facebook, and see your anonymous profile as a result.

Be careful about posting in online discussion groups
There are online discussion groups regarding topics related to your work. By all means, join such a forum and read the posts. But be careful about posting, including replying to others. When you post, you reveal your IP address. This will NOT reveal your name, your home address, your age, etc. But your IP address may reveal where you work IF you are accessing the group from your workplace’s Internet connection and if that connection is configured a certain way.
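To illustrate the risk, here is a minimal sketch, in Python, of the kind of lookup anyone can run on an IP address attached to a post. The address used is a documentation placeholder, not a real one; the point is that a reverse DNS query on a workplace or university connection often returns a hostname that names the organization.

    # A minimal sketch: how an IP address seen alongside a post can point
    # back to an organization. 203.0.113.45 is a documentation placeholder
    # (RFC 5737), not a real address.
    import socket

    ip_seen_in_post = "203.0.113.45"

    try:
        hostname, _, _ = socket.gethostbyaddr(ip_seen_in_post)
        # Workplace and university connections often have reverse-DNS names
        # that include the organization's domain, which gives away where
        # the connection comes from.
        print("Reverse DNS name:", hostname)
    except socket.herror:
        print("No reverse DNS record for this address.")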

Practice denying your online activities
People are going to ask you if you are on Facebook or Twitter. Practice saying no. Also practice your response to someone who says, “Is so-and-so on Facebook really you?”

If someone you do not know starts messaging your fake account, be careful about engaging with them. If they are asking “Who are you?” or “Why did you say that?”, ignore them. If they are asking how you know a shared friend, ignore them. If they become insulting, block them. If they say they are a reporter and they saw your post somewhere and would like to interview you, ask them what newspaper or TV station they work for, ask for their full name, and then look up that organization online and call them and ask if that person works there. In other words, make absolutely sure it’s a REAL journalist that is asking you questions!

If anyone threatens you online, screen capture those messages and save them. If anyone threatens you online with physical harm in any way and you believe that person could figure out who you are, it may be best for you to block them and delete your account. Your safety is always paramount and you should do what you need to do to stay safe.

Why am I not recommending that a person contact the company that operates the platform or social media site to report harassment, or contact the local police department? That is certainly an option if you live in a country that has the rule of law. However, if you live in a developing country or a country that has laws that censor Internet access, such reporting could actually put you in danger. Even so, hold on to your screen captures of threatening messages and share them with a person you trust if you feel they represent a real threat to you or your family.

Update April 16, 2019: The Kandahar field office of the UN Assistance Mission in Afghanistan (UNAMA) hosted a discussion with 20 women representatives of civil society, local media, provincial council members, teachers and university students active on social media. The participants agreed that social media campaigns and platforms are important means of advocacy for women to play their role in the peace process. Balancing the pros with the cons, such as risks of harassment from trolls and others, they created a closed social media group dedicated to empowering women. In southern Afghanistan, as in other parts of the country, women are largely left out of decision-making and peace processes. Gender-based violence is prevalent, and women are not visible in many public domains because of family and other cultural restrictions. The limitations apply to social media as well, with indicators showing that, despite the potential, very few women in the southern region are active in this sphere. See more via this UNAMA Facebook update.

Update February 5, 2021: Use a virtual private network (VPN), an encrypted internet connection that allows users to safely transmit sensitive data and prevents unauthorized access. A VPN can also hide your location: start the software and pick a different city from where you actually are, so that even someone with sophisticated tech tools and skills CANNOT see what city, or even what country, you are really in. Here’s a decent article comparing VPNs. Put the software on your computer AND your smartphone!
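If you want to confirm that the VPN is actually hiding your address, a simple check is to compare the public IP address the outside world sees with the VPN off and then on. Below is a minimal sketch in Python; it assumes the free api.ipify.org service, which simply echoes back the address your request came from, is reachable from your connection.

    # A minimal sketch: check what public IP address the outside world sees.
    # Run once with the VPN off and once with it on; if the two addresses
    # (and the locations they map to) differ, your real address is hidden.
    from urllib.request import urlopen

    def public_ip() -> str:
        # api.ipify.org returns, as plain text, the IP address your
        # request arrived from.
        with urlopen("https://api.ipify.org") as response:
            return response.read().decode("utf-8").strip()

    print("Public IP right now:", public_ip())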

Also see:

Orange Day: UNiTE to End Violence Against Women campaign

The United Nations Secretary-General’s UNiTE to End Violence against Women campaign, managed by UN Women, has proclaimed every 25th of the month as “Orange Day” – a day to take action to raise awareness and prevent violence against women and girls. Orange Day calls upon activists, governments and UN partners to mobilize people and highlight issues relevant to preventing and ending violence against women and girls, not only once a year, on 25 November (International Day for the Elimination of Violence against Women), but every month.

Orange Day 2017 action themes so far:

February: Violence Against Women and Girls and Women’s Economic Empowerment

March: Violence against Women and Girls with Disabilities

April: Violence against indigenous women and girls

May: Mobilizing resources to end violence against women and girls

June: Violence against women and girl refugees

July’s theme was Cyber violence against women. The official statement from UNiTE notes: “Although children have long been exposed to violence and exploitation, ICTs have changed the scale, form, impact and opportunity for the abuse of children everywhere. While both girls and boys are vulnerable to the different risks and harms related to the misuse of ICTs, girls have been disproportionately victimized in sexual abuse and exploitation through the production and distribution of child sexual abuse materials. In 2013, 81 per cent of child sexual abuse materials depicted girls. Girls are also particularly vulnerable to being groomed online for sexual encounters and sometimes exploited through live streaming of their sexual abuse. Many children are experiencing widespread victimization through online bullying, harassment, and intimidation, where girls are particularly targeted due to gender norms and power dynamics. Gender discrimination, lack of confidence, difficulty with language, poverty, and cultural factors can adversely affect girls and lead to their heightened vulnerability to these crimes and victimization.” SDG 5 of the Sustainable Development Goals (SDGs) is focused on Gender Equality, and places women’s access to technological empowerment as one of the core indicators for progress. “To achieve this goal, we must make sure that the internet will be a safe and more secure place that allows all women and girls to fulfill their potential as valued members of society and live a life free from violence.”

UNiTE has curated several resources related to such:

  • The Broadband Commission Working Group on the Digital Gender Divide recently published a set of recommendations that specifically addresses online threats, aimed both at promoting better understanding and awareness of the ways in which women experience threats and at ensuring that stakeholders help to make the Internet and its use safer for women (page 32). Proposed actions include researching and understanding threats, increasing awareness of threats and how they can be addressed or reduced, developing safety applications and services, and strengthening protection measures and reporting procedures.
  • The “Perils and Possibilities: Growing up Online” report, recently published by UNICEF, provides a glimpse into young people’s opinions and perspectives on the risks they face coming of age in a digital world.
  • UNICEF is collaborating with companies, governments and civil society to promote children’s rights related to the Internet and associated technologies. Take a look at UNICEF’s online depository of new business tools and guidance on child online protection which among others includes useful resources, learning materials, and tools for companies.
  • The Guidelines for Child Online Protection, prepared by ITU, outline best practices and key recommendations for different interest groups, including policy makers, industry, children, as well as parents, guardians, and educators. More resources on Child Online Protection from ITU’s database.
  • INHOPE is an active and collaborative global network of hotlines dealing with illegal online content and committed to stamping out child sexual abuse from the Internet. The network offers a way for the public to anonymously report Internet material, including child sexual abuse material, that they suspect to be illegal.
  • Launched in January, HeartMob is a project of Hollaback!, a non-profit organization powered by a global network of local activists who are dedicated to ending harassment in public spaces. The platform provides real-time support to individuals experiencing online harassment and empowers bystanders to act.

It’s also worth reading Women’s Rights Online, a 2015 report from the Web Foundation showing that the dramatic spread of mobile phones is not enough to get women online, or to achieve empowerment of women through technology. The study, based on a survey of thousands of poor urban men and women across nine developing countries, found that while nearly all women and men own a mobile phone, women are still nearly 50% less likely to access the Internet than men in the same communities, with Internet use reported by just 37% of women surveyed (vs. 59% of men). Once online, women are 30-50% less likely than men to use the Internet to increase their income or participate in public life. The report says young people are most likely to have suffered harassment online, with over six in 10 women and men aged 18-24 saying they had suffered online abuse. The Web Foundation was established by Web inventor Sir Tim Berners-Lee.

Also see:

How to change minds

I’m a part of the March for Science Facebook group, for people who were in the Marches for Science all across the USA in April 2017 or who supported them. A lot of the talk on the group has been about science education and public relations. There are individuals and communities all over the USA – and the world – fighting against science-based decision-making in public policies and science education in schools, and many in the group feel this is because of poor wording and poor public outreach by scientists and those who support science. In my ongoing quest to be a better communicator, I’ve watched these discussions closely.

Recently, someone posted the following regarding how we communicate about science. I think it’s a great testimony about what works, and what doesn’t, in swaying public opinion, changing people’s minds and fighting misinformation. I’m sharing it here, with her permission, but without her name to protect her identity:

I’m not a scientist. I’m not afraid of science but I also don’t have a strong grasp of most science related jargon. I joined this group along with a few other science groups/pages as I heard more and more of anti-science rhetoric from our govt. Although I don’t understand a lot of scientific things that doesn’t mean I don’t realize the importance of science for our society and for our future.

I have learned SO MUCH from reading posts and comments. The reason I have learned so much? The reason I am no longer “afraid” of GMO’s? The reason I have changed my mind on other popular misconceptions? Because my fear was never the science. My fear was that I didn’t know what information to trust. Money talks. It’s hard to figure out who is paying. Do I trust a science study that was paid for by a big corporation? Do I trust a study that’s published but not peer reviewed? WHO do you trust?

The common thread I’ve found as I read posts and comments in order to learn more is how stupid I am. How dumb was I to not trust GMO’s. People’s comments were blatantly MEAN. And sure, I was completely uneducated about GMO’s. I read the wrong information. I trusted the wrong sources. But again, without hours of research to find out funding sources, etc HOW do I know what to trust?

This question was amazing. I always want to learn more. I want to understand about so many things – to give my kids the best future possible. The best food to eat. The best meds for my asthmatic child. The best environment for them to grow up in, etc. But here’s the thing. If I wasn’t determined to do the best for my kids . . . by the 100th ridiculing comment on a post I found interesting I would have stopped following and learning. Heck by the 20th I would have written off these sciences pages.

Even in this thread there are those using terms like “stupid,” “brainwashing,” etc. Very derogatory terms and grouping all people who don’t have a knack for science into one realm. I have a great head for business, finances and can analyze the heck out of any non-technical literature. I don’t make fun or ridicule those people who don’t have that ability. It accomplishes nothing.

So thank you to those of you who answered this post thoughtfully. I’m certain there are many of you who diligently try over and over again to get your point across. Don’t give up. Changing peoples’ minds is never easy but in this case it’s worth the fight.

—end quoted text—

Also see:

Behavioural Insights at the United Nations – Achieving the 2030 Agenda

The United Nations has embraced the use of behavioral science to help it craft effective development activities and interventions. As it notes on this November 2016 blog:

Across the globe, all people – poor or rich – sometimes make choices that are not conducive to their own well-being. Saving enough for retirement, eating healthy, investing in education – all too often we humans postpone intended actions to ‘tomorrow’, succumb to inertia or get stuck in habits.

In light of the extensive research on the cognitive biases that influence human decision-making, there is a broad consensus that traditional economic models are insufficient for effective policy-making. Behind every policy lie assumptions about how humans will behave in light of new regulations and why we act the way we do.

UNDP has embraced the idea of network nudges: people are influenced by the behavior of friends and members of their extended social network, and they look to other people’s behavior as a guide to what is acceptable and desirable. UNDP has been cooperating with the UK Behavioural Insights Team since 2013, and UNDP’s report, Behavioural Insights at the United Nations – Achieving the 2030 Agenda, advocates this approach for inclusion in every policy maker’s toolbox and presents 10 valuable case studies. This is from the page at the aforementioned link:

In 2016, the UNDP Innovation Facility collaborated with the newly engaged UN Behavioural Science Advisor to work on behaviorally-informed design with 8 UNDP Country Offices in all 5 regions: Bangladesh, Cameroon, China, Ecuador, Jordan, Moldova, Montenegro and Papua New Guinea. This Progress Report highlights the potential of behavioural insights to help achieve the Sustainable Development Goals and provides an overview of the 8 initiatives.

Behavioural insights draw from research findings from psychology, economics and neuroscience. These insights about how people make decisions matter for development. They matter for policy-formulation and addressing last mile problems.

UN Secretary General Ban Ki-moon noted that, “In order to succeed, Agenda 2030 must account for behavioural insights research… Our organization, our global agenda – and most importantly the people worldwide they are intended to serve – deserve nothing less than the best science available. A human-centered agenda requires a rigorous, research-based understanding of people.”

The report shows that approaching development challenges with behavioural insights leads to better diagnoses of problems and to better designed solutions. Public policy and programme officials around the world can achieve better outcomes — often at low or no cost — simply by leveraging our current understanding of human psychology and behaviour.

In January 2016, the UN Secretary-General appointed two “Behavioural Insights Advisors,” initially for six months. They worked with the UNDP Innovation Facility to improve uptake of an e-waste recycling solution in China, crowdfunding efforts for green energy in Ecuador, the anti-corruption initiative ‘Phones Against Corruption’ in Papua New Guinea, and more.

Wikipedia actually has some good pages that provide an overview of these and related subjects:

And here are some of my own resources on these and related subjects:

ICTs to reach & educate at-risk communities

Apps, social media, text messaging/SMS and other information and communication technologies (ICTs) are already playing a crucial role in educating people regarding public health issues, reaching marginalized communities and helping those who may be targets of harassment and discrimination. But in all of these tech4good initiatives, safety and security for those doing the outreach and for those in the target audience are critical. People trying to promote a tech4good initiative do not want the technology to be used by hostile parties to identify, track and target people based on their health, lifestyle or beliefs.

For those interested in using ICTs to reach marginalized communities, or in how to communicate vital information about topics that are frowned upon in religiously conservative communities, the new publication Pioneering HIV services for and with men having sex with men in MENA: A case study about empowering and increasing access to quality HIV prevention, care and support to MSM in a hostile environment is well worth your time to read. The United States Agency for International Development (USAID) funded this project, and the 48-page publication was produced by the International HIV/AIDS Alliance and co-authored by Tania Kisserli, Nathalie Likhite and Manuel Couffignal. The publication includes two pages on how ICTs help to reach hidden communities threatened by police raids and rising homophobia in the MENA (Middle East and North Africa) region – for instance, how applications such as Grindr, which are frequently accessed by men having sex with men (MSM) in the region, provide virtual venues for disseminating information on HIV prevention, treatment and support services.

This is from the report (note that this is with British spellings):

In 2015, the partners of the MENA programme implemented a pilot online peer outreach project to reach more MSM, in partnership with the South East Asian Foundation B-Change Technology.

In order to improve the understanding of the online habits and behaviours of MSM, two anonymous web surveys were launched online to collect information among MSM (living in Algeria, Lebanon, Morocco and Tunisia), recruited via Facebook and instant messaging channels. The first survey assessed technology use and included questions about mobile devices and tech-based sexual networking. The second survey collected further data on social media behaviours, with questions about using social networks, interpersonal communications, and negative experiences online. The results confirmed the penetration of internet and mobile technologies in urban centres, and highlighted the widespread use by MSM of mainstream social networks (predominantly Facebook) and global gay dating apps, especially in the evening. The predominant website for sexual networking was reported to be Planet Romeo; the predominant smartphone app for sexual networking was Grindr. The results also revealed that while MSM use smartphone instant messaging (SMS and Whatsapp mainly) to communicate and chat with friends, they tend to use the telephone when communicating with health providers. Sexual networking among this cohort demonstrated a preference for web-based methods versus offline (public space) networking. A significant proportion of negative experiences using social media or apps was also reported, in particular cases of breach of confidentiality online.

Based on these findings, the partners designed a pilot information and communications technology (ICT)-based intervention. Experienced peer educators created avatars representing different profiles of beneficiaries, collectively designed an online peer outreach intervention and developed the corresponding standard operating procedures and M&E framework. This was identified as the most feasible output based on existing resources and ICT experience. Building the capacity of community groups for this intervention would result in more effective use of popular social media platforms for MSM-peer outreach activities. Local trainings of ‘online peer educators’ were organised to strengthen digital security, content creation systems, online outreach procedures, conduct of peer educators online, and M&E framework to measure the outcomes towards the HIV continuum of care.

The trained ‘online peer educators’ created ‘virtual peer educators’ accounts/profiles and contacted MSM though internet and social media in their respective countries, mainly on Facebook, Whatsapp, Grindr, Hornet, Planet Romeo, Badoo, Tango and Babel, and mostly during evening and night shifts. The objective was to contact MSM not reached by the usual outreach in public spaces, and hence continue expanding the package of prevention services available to MSM. They provided interpersonal communications on HIV and STIs, disseminated IEC materials online, encouraged them to take an HIV test and referred them to prevention services provided by the partner organisations, as well as public health services in their country.

This test phase lasted from July to September 2015 in Agadir, Beirut, Tunis and Sousse. The results were promising; during the month of September 2015, the six online peer educators of ASCS in Agadir for instance reached 546 MSM via chat rooms, websites, apps and instant messaging. They referred 148 MSM for an HIV test and 86 MSM for an STI consultation. During this period ASCS noticed an increase of number of MSM visiting the association to collect condoms and lubricant; ASCS peer educators appreciated this new type of outreach work compared to street outreach, the latter being uneasy due to growing harassment of police. Some challenges that peer educators faced online were similar to ‘traditional’ or face-to-face outreach work: high interest in sexual health, initially reluctance to visit association or uptake services, or to change risk behaviour.

“The virtual prevention pilot project has allowed us to reach a significant number of MSM, in particular those who remain hidden and aren’t reached through our outreach activities in the streets.” — peer educator and university student in Morocco

Some of the lessons learned from this pilot project:

  • Overall high acceptability: many MSM are eager to engage in an online conversation about HIV and STI prevention, rights and services; virtual spaces are perceived as safe to talk freely about sexual practices with no face-to-face bias; however, a significant proportion of MSM contacted online refused any discussion relating to sexual health and HIV.
  • Strong operational procedures and human resource capacity are required to maintain a high quality ICT tool that maintains privacy and confidentiality; consequently, organisational ICT capacity needs to be assessed and strengthened before initiating an online prevention project.
  • Monitoring and evaluation challenges: it is not easy to measure service use or user engagement online or to clearly show the link between use of ICT and uptake of services; monitoring of referral pathways between outreach CSOs and friendly providers needs to be aligned to track referral from virtual spaces to services.

One thing I do wonder: were any of the people involved volunteers?

Also see:

Folklore, Rumors & Misinformation Campaigns Interfering with Humanitarian Efforts & Government Initiatives

UPDATED:

Preventing Folklore, Rumors, Urban Myths & Organized Misinformation Campaigns From Interfering with Development & Aid/Relief Efforts & Government Initiatives

Folklore, rumors and contemporary myths/legends often interfere with development aid activities and government initiatives, including public health programs – even bringing such to a grinding halt. They create ongoing misunderstandings and mistrust, prevent people from seeking help, encourage people to engage in unhealthy and even dangerous practices, and have even led to mobs of people attacking others because of something they heard from a friend of a friend of a friend. With social media like Twitter and Facebook, as well as simple text messaging among cell phones, spreading misinformation is easier than ever.

Added to the mix are fake news sites set up specifically to mislead people, as well as crowdsourced efforts by professional online provocateurs and automated troll bots pumping out thousands of comments. With all of this, countering misinformation has to be a priority for aid and development organizations, as well as government agencies.

Since 2004, I have been gathering and sharing both examples of this phenomenon and recommendations on preventing folklore, rumors and urban myths from interfering with development and aid/relief efforts and government initiatives. I’ve recently updated this resource with new material on countering organized misinformation campaigns.

Anyone working in development or relief efforts, or working in government organizations, needs to be aware of the power of rumor and myth-sharing, and be prepared to prevent and to counter such. This page is an effort to help those workers:

  • cultivate trust in the community through communications, thereby creating an environment less susceptible to rumor-baiting
  • quickly identify rumors and misinformation campaigns that have the potential to derail humanitarian aid and development efforts
  • quickly respond to rumors and misinformation campaigns that could derail or are interfering with humanitarian aid and development efforts

And, FYI: I do this entirely on my own, as a volunteer, with no funding from anyone. I update the information as my free time allows.

Also see:

fake news, folklore & friendships

It wasn’t getting a journalism degree, or being a journalist, that made me a skeptic when it comes to sensational stories. It was a folklore class. Urban Folklore 371, to be exact. It was a very popular class at Western Kentucky University back in the late 1980s, both for people getting a degree in folklore studies and for people needing humanities courses for whatever their degree program was, like me. Class studies focused on contemporary, largely non-religious-based legends, customs and beliefs in the USA. One class might focus on watching a film about the games kids play on a playground and how those games explore the things they fear – marriage, childbirth, stranger danger, being ostracized by their peers, etc. Another class might review the different versions of the “vanishing hitchhiker” story, why such stories are so popular in so many different cultures, and how the story changes over time.

I heard at least one student say, “That’s not a true story?! I always thought it was!” in every class. Because of that class, I realized there were legends being told as truth all around me, by friends, by family, even by newspapers. “I heard it from my cousin” or “My friend saw it in a newspaper” or “My Mom saw it on Oprah” was usually the preface to some outlandish story told as fact. But the class taught me that, in fact, no woman was ever killed by spiders nesting in her elaborate hairdo, that there has never been a killer with a hook for a hand who attacked a couple in a parked car in a nearby town, that no actor has ever had a gerbil removed from his anus, and on and on and on.

I became the “um – that’s not true” girl at various places where I worked. And then via email. And I still am, now on social media. And what I have learned from being little Ms. Debunker is that people REALLY do NOT like these stories debunked. In fact, pointing out the facts that prove these stories aren’t true, no matter how gently I try to do it, often makes people very angry.

Back in the 1990s, a friend sent me yet another forwarded email. This time, the text said the email was from Microsoft founder Bill Gates, that he’d written a program that would trace everyone to whom the email message was sent, and that he was beta testing the program. The email encouraged people to forward the message and said that if it reached 1,000 people, everyone on the list would receive $1,000. Of course, it wasn’t true – I knew it as soon as I saw it. She’d sent me several of these types of emails – one said people who forwarded the message would get a free trip to Disney World, another said we’d all get free computers, and on and on. I had been deleting them, but I was tired of it. So I looked online, found a site that debunked the myth, and sent her the link. I didn’t make any judgment statements; I just said, “This is a myth. Here’s more info. You might want to let everyone you sent it to know, as well as the person you got it from,” or something similar.

She was not happy with me. In fact, it almost ended our friendship. She told me that the Internet was “a place for having fun” and “you can’t win if you don’t play” and what did she have to lose by forwarding the message even if it sounded fishy?

And that kind of reaction kept happening. Three new friends I made back in 2010, after I’d moved back to the USA, all unfriended me on Facebook the same day, outraged that I pointed out that several things they were posting as status updates – about how Facebook was going to start charging users, about how putting up a disclaimer on your Facebook page would stop the company from being able to sell your information, and on and on – were all urban legends, all untrue. Their reaction was almost verbatim what that friend had said via email: Facebook is “a place for having fun” and “it’s better to be safe and share it” and what did they have to lose by sharing the message even if it sounded fishy? Also, they said they did not have time to “check every single thing online.”

Now, in 2016, I have friends who are furious with me for posting science-based web sites that debunk their posts from quack sites like the “Food Babe” claiming that GMOs cause cancer or that vaccines cause autism (to be clear, these are MYTHS). Two journalists – JOURNALISTS – were mad at me when I pointed out that a status update one had shared – it urged users to use the Facebook check-in function to say they were at Standing Rock in North Dakota, claiming this would somehow prevent the Morton County Sheriff’s Department there from geotargeting DAPL protesters – was promoting false information. I wasn’t just annoyed by the message – I found it imprudent, and yet another example of slackervism or slacktivism: people truly wishing to assist the protesters were checking in on Facebook rather than doing something that would REALLY make a difference, like sending funds to support the protest efforts or writing their Congressional representatives in support of the protesters. It also misdirects attention from the nefarious ways law enforcement really does surveil people on social media. I would have thought journalists would know better than to engage in such behavior.

Contemporary legends online cause harm, and this has bothered me since long before the Standing Rock/Facebook check-in myth. Since 2004, I have been gathering and sharing examples of how rumors and urban/contemporary myths often interfere with relief and development activities and government initiatives, including public health initiatives – even bringing such to a grinding halt. These myths create ongoing misunderstandings among communities and cultures, prevent people from seeking help, encourage people to engage in unhealthy and even dangerous practices, cultivate mistrust of people and institutions, and have even led to mobs of people attacking someone for no reason other than something they heard from a friend of a friend of a friend. With the advent of social media like Twitter and Facebook, as well as simple text messaging among cell phones, spreading misinformation is easier than ever.

Based on my experience as a researcher and a communications practitioner, and everything I’ve read – and I read a LOT on this subject – rumors that interfere with development and aid/relief efforts and government health initiatives come from:

  • misinterpretations of what a person or community is seeing, hearing or experiencing,
  • previous community experiences or cultural beliefs,
  • willful misrepresentation by people who, for whatever reason, want to derail a development or relief activity,
  • unintentional but inappropriate or hard-to-understand words or actions by a communicator, or
  • the desire of an individual or community to believe an alternative narrative, a desire that is stronger than the facts

That list of bullet points was central to the long list I made of recommendations on preventing folklore, rumors and urban myths from interfering with such initiatives. I made that list to help aid workers, particularly people leading public health initiatives. For years, I’ve updated that list and felt really good about it being comprehensive and realistic, and I’ve employed some of the methods myself in my work.

But are these recommendations enough anymore? I’m not sure. Because BuzzFeed reported that fake news stories about the USA Presidential election this year generated more engagement on Facebook than the top election stories from 19 major news outlets COMBINED – outlets that include The New York Times, The Washington Post, CNN, and NBC News. And a new study from Stanford researchers evaluated students’ ability to assess information sources, and described the results as “dismaying,” “bleak” and a “threat to democracy,” as reported by NPR News. Researchers said students displayed a “stunning and dismaying consistency” in their responses, getting duped again and again. The researchers weren’t looking for high-level analysis of data but just a “reasonable bar” of, for instance, telling fake accounts from real ones, activist groups from neutral sources and ads from articles. And the students failed. Miserably. And then there’s my own experience seeing the reaction a lot of people have to references to sites like snopes.com or truthorfiction.com or hoax-slayer.com or the Pulitzer Prize-winning site PolitiFact that debunk myths; those people claim that “These sites aren’t true. They’re biased.” And that’s that – just a simple dismissal, so they can continue to cling to falsehoods.

National Public Radio did a story a few days ago about a man in Los Angeles who decided to build fake news sites that publish outrageous, blatantly false stories that extreme far-right groups in the USA (also known as the “alt-right”) would love to believe; he thought that when these stories were picked up by white supremacist web sites and promoted as true, he and others, particularly major media outlets, would be able to point out that the stories were entirely fiction, created only as bait, and that the white supremacists were promoting them as fact. But instead, thousands of people with no formal association with white supremacist groups shared these stories as fact – reaching millions more people. He wrote one fake story for one of his fake sites on how customers in Colorado marijuana shops were using food stamps to buy pot. Again, this story is NOT TRUE. But it led to a state representative in Colorado proposing actual legislation to prevent people from using their food stamps to buy marijuana; a state legislator was creating legislation and outrage based on something that had never happened.

BTW, to see these fake news sites for yourself, just go to Google and search for snopes is biased, and you will get a long list of links to fake news sites, most right-wing, all fighting against debunking fact-based sites like Snopes. I refuse to name those fake news sites because I don’t want them to get any more traffic than they already do.

Competent decision-making depends on people – the decision-makers – having reliable, accurate facts put in a meaningful and appropriate context. Reason – the power of the mind to think, understand and form judgments by a process of logic – relies on being able to evaluate information regarding credibility and truth. But fact-based decision-making, the idea of being logical and using reason and intellect, has become something to eschew. The modus operandi for many is to go with your gut, not with the facts. Go not for truth, but truthiness.

I always thought that last bullet in my list of why people believe myths, “the desire of an individual or community to believe an alternative narrative, a desire that is stronger than the facts,” was easy to address. Now, given all the aforementioned, I’m not at all sure.

I’m going to keep calling out myths whenever I see them, and if it costs me Facebook friends, so be it. I prefer the truth, even when the truth hurts, even when the truth causes me to have to reconsider an opinion. There is a growing lack of media literacy and science literacy in the USA – and, indeed, the world. And the consequences of this could be catastrophic – if they haven’t been already. People need to be able to not just access information, but also to analyze it and evaluate the source. That’s just not happening. And I’ve no idea how to change things.

Also see:

8:10 am Nov. 28, 2016 Update: Filippo Menczer, Professor of Computer Science and Informatics and Director of the Center for Complex Networks and Systems Research at Indiana University, Bloomington, authored the article Why Fake News Is So Incredibly Effective, published in Time and The Conversation. Excerpts: “Our lab got a personal lesson in this when our own research project became the subject of a vicious misinformation campaign in the run-up to the 2014 U.S. midterm elections. When we investigated what was happening, we found fake news stories about our research being predominantly shared by Twitter users within one partisan echo chamber, a large and homogeneous community of politically active users. These people were quick to retweet and impervious to debunking information.” Also of note: “We developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements… our lab is building a platform called Hoaxy to track and visualize the spread of unverified claims and corresponding fact-checking on social media. That will give us real-world data, with which we can inform our simulated social networks. Then we can test possible approaches to fighting fake news.”

1:05 pm Nov. 29, 2016 Updates:

Donald Trump and the Rise of Alt-Reality Media: You think the truth took a hit last year? It’s about to get worse. A lot worse. from Politico.

For Some, Scientists Aren’t The Authority On Science from NPR

Dec. 3, 2016 Updates:

Spread of Fake News Provokes Anxiety in Italy from The New York Times

Dec. 6, 2016 Updates:

A North Carolina man read online that a pizza restaurant in northwest Washington, DC, was harboring young children as sex slaves as part of a child-abuse ring, so he drove six hours from his home to the restaurant, and not long after arriving, he fired shots from an AR-15-style rifle. No one was injured, and he’s been arrested, but, as The New York Times notes, “the shooting underscores the stubborn lasting power of fake news and how hard it is to stamp out. Debunking false news articles can sometimes stoke the outrage of the believers, leading fake news purveyors to feed that appetite with more misinformation. Efforts by social media companies to control the spread of these stories are limited, and shutting one online discussion thread down simply pushes the fake news creators to move to another space online. The articles were exposed as false by publications including The New York Times, The Washington Post and the fact-checking website Snopes. But the debunking did not squash the conspiracy theories about the pizzeria — instead, it led to the opposite. ‘The reason why it’s so hard to stop fake news is that the facts don’t change people’s minds,’ said Leslie Harris, a former president of the Center for Democracy & Technology, a nonprofit that promotes free speech and open internet policies.”

Dec. 9, 2016 update

“Fakes, News and the Election: A New Taxonomy for the Study of Misleading Information within the Hybrid Media System”
by Fabio Giglietto, Laura Iannelli, Luca Rossi and Augusto Valeriani
November 30, 2016. Convegno AssoComPol 2016 (Urbino, 15-17 December 2016), forthcoming. Available at SSRN: https://ssrn.com/abstract=2878774

Abstract:
The widely unexpected outcome of the 2016 US Presidential election prompted a broad debate on the role played by “fake-news” circulating on social media during political campaigns. Despite a relatively vast amount of existing literature on the topic, a general lack of conceptual coherence and a rapidly changing news eco-system hinder the development of effective strategies to tackle the issue. Leveraging on four strands of research in the existing scholarship, the paper introduces a radically new model aimed at describing the process through which misleading information spreads within the hybrid media system in the post-truth era. The application of the model results in four different typologies of propagations. These typologies are used to describe real cases of misleading information from the 2016 US Presidential election. The paper discusses the contribution and implication of the model in tackling the issue of misleading information on a theoretical, empirical, and practical level.

Also see: Feuds in the nonprofit/NGO/charity world

Research Explaining How Websites Encourage Volunteering & Philanthropy

Most practitioners in volunteer management and community engagement don’t have time to review academic literature to see if there might be information that’s helpful in their work – and even if they do have time, academic language can be inaccessible for non-academics. I try to read as much as I can and then summarize and pass on the information that can help practitioners in their work, or even just give them ammunition for a project or funding proposal.

Below are links to two academic papers that are worth at least a skim by anyone trying to use web sites to encourage philanthropy, including volunteering. The reference lists at the end of each paper are gold mines of research for further reading:

Persuasion in Prosocial Domains: Explaining the Persuasive Affordances of Volunteering
by Peter Slattery, Patrick Finnegan and Lesley Land, all three of the Australian School of Business, UNSW Australia, and Richard Vidgen of Hull University Business School, University of Hull, UK. Presented at the Twenty Second European Conference on Information Systems, Tel Aviv, 2014.

Abstract: As technology becomes increasingly pervasive and invasive, it increasingly facilitates and instigates behaviour. Prosocial behaviours, such as volunteering, activism and philanthropy, are activities that are considered to be particularly beneficial to others. Prosocial behaviours are important within IS as: (i) they are encouraged by IS stakeholders including volunteering organisations and charities, and; (ii) they contribute to tackling social issues. However, while information technology is poised to become increasingly important for facilitating prosocial behaviour, little is known about how digital artefacts can encourage it. To address this research gap, this study seeks to explain how website features persuade in prosocial online contexts. The study uses the Repertory Grid Technique (RGT) to examine individuals’ experiences of persuasion on live volunteering websites. The analysis reveals that ease of use, trust, and creating positive emotion are important factors in persuading users to volunteer.

Examining How Perceptions of Websites Encourage Prosocial Behaviour
by Peter Slattery, Patrick Finnegan and Richard Vidgen of Australian School of Business, UNSW Australia. Presented at the Thirty Seventh International Conference on Information Systems, Dublin 2016.

Abstract: Organisations are increasingly reliant on information and communications technology (ICT) to encourage prosocial behaviour (i.e., volunteering, philanthropy and activism). However, little is known about how to use ICT to encourage prosocial behaviour. Given this research gap, the objective of this study is to outline and test a research model that assesses the role of specific perceptions of websites in encouraging prosocial behaviour. To do this, we review the literature to derive a theoretical model of relevant perceptions. We then test the extent to which this model can predict participants’ volunteering and philanthropic behaviour subsequent to their usage of a website that encourages prosocial behaviour. The findings are expected to contribute by (i) giving insights into how perceptions of websites encourage prosocial behaviour, (ii) explaining the roles of negative and positive affect in ICT domains, and (iii) developing a “persuasiveness of website scale” to help IS researchers to measure this construct.

In addition, Mr. Slattery’s 2016 PhD thesis is Explaining How Websites Are Used to Encourage Volunteering and Philanthropy. The thesis is restricted from public access until March 2018, but some of its research is repeated in the aforementioned papers.

Also see this list of research and evaluations of virtual volunteering, as a practice in general or focused on specific projects, on the Virtual Volunteering wiki.

UN, NGO efforts to counter hate

On December 2, 2015, the United Nations Alliance of Civilizations (UNAOC) held a Symposium on Hate Speech in the Media, with senior officials calling for a global mobilization of citizens to help counter messages that promote xenophobia, violent extremism and prejudice. The symposium was the first of a series that UNAOC will host, called Tracking Hatred. The next symposium will be held in Baku, Azerbaijan, in April.

The UN Counter Terrorism Executive Directorate (CTED) also organized two days of panel discussions later in December, a collaboration between the public and private sector, called “Preventing Terrorists from Exploiting the Internet and Social Media to Recruit Terrorists and Incite Terrorist Acts, While Respecting Human Rights and Fundamental Freedoms.”

@unaoc, @friendsunaoc, @UN_CTED and other agencies, UN and non-UN alike, are using #SpreadNoHate and #Reclaimingtheweb on Twitter to promote messages from these efforts. I’ll be using them as well, often, as appropriate.

Cristina Gallach, UN Under-Secretary-General for Communications & Public Information, said during the UNAOC event, “Hate speech has been with us for a long time. We will never forget the slaughter of over 800,000 Tutsis and moderate Hutus during a brief three month period in Rwanda in 1994. We will never forget either the six million Jews plus five million others who perished because of one hateful vision… Today, however, more than ever, individuals are using hate speech to foment clashes between civilizations in the name of religion. Their goal is to radicalize young boys and girls, to get them to see the world in black and white, good versus evil, and get them to embrace a path of violence as the only way forward.” She wasn’t just referring to Daesh (also known as ISIL or ISIS), though they are the most high-profile right now and, therefore, they were the primary focus of this event.

From what I’ve read about the symposium, there were lots of comments by speakers about enforcing laws that prohibit incitement of hatred or violence, and about social media companies being compelled to quickly delete content. I’m wary of this kind of talk, as governments use cries of “hate speech” to arrest people who are critical of the government or a religion, such as this 14-year-old boy in Turkey, or these teens in Egypt, or Raif Badawi in Saudi Arabia. I much prefer strategies focused on communications activities that establish and promote a narrative that pushes back against hate and prejudice, and was glad to see that strategy as a focus of two of the CTED panels, one called “privacy and freedom of expression in the digital age” and another that I am very interested in, called “Use of Internet and communications technology for counter-messaging purposes” – the link goes to a webcast of the panel, moderated by Steven Siqueira, Acting Deputy Director, CTITF Office / UN Counter-Terrorism Centre (UNCCT) – I so wish there were a transcript of this panel! If you want to listen to just a bit, here’s my absolute favorite: go to around the 14:00 point and listen to Humera Khan, Executive Director of Muflehun – she gives realistic, practical advice on mobilizing youth to counter online messages of hate. And then listen to Jonathan Birdwell of the Institute for Strategic Dialogue, right afterward, talking about teaching young people to critically engage with what they read online, and the importance of digital literacy. And then jump to around 36:00 and listen to Abdul-Rehman Malik, who has a provocative, assertive, right-on challenge to governments on this subject. The question-and-answer session after these three presentations is worth your time as well. The entire session lasts about 90 minutes, and is really worth your time to listen to (please, UN, release it as a podcast!).

I hope the people involved in these UN and civil society efforts know that, in the last 24 hours, Muslims on Twitter have hilariously trolled a Daesh leader’s call to violence – humor is a powerful tool in fighting against prejudice, and these tech-savvy Muslims are doing it brilliantly. I hope they know about online groups like Quranalyzeit and Sisters in Islam, tiny organizations doing a brilliant job online of countering extremist messages regarding Islam, and doing it as Muslims and from an Islamic perspective. Or about Mohamed Ahmed, a middle-aged father and gas station manager, one of many Muslims in Minneapolis, Minnesota frustrated by Daesh’s stealthy social media campaigns, who is countering them with a social media campaign of his own, AverageMohamed.com.

AND I HOPE EVERYONE KEEPS TALKING. Because I think they are talking about activities and messages that will really work in stopping the violence, and will make all aid and development efforts – about water, about reproductive health, about agriculture, WHATEVER – actually work, actually be sustainable. I so wish all of these efforts were getting more attention online, in traditional media, among all United Nations agencies, among NGOs, and among politicians.

Also see:

Propaganda for good (blog)

Recommendations for UN & UNDP in Ukraine to use Twitter, Facebook, Blogs and Other Social Media to Promote Reconciliation, Social Inclusion, & Peace-Building in Ukraine (PDF)

Reconciliation (a blog of frustration I wrote while working in Ukraine in 2014)