Social Media, Influence, and Democracy: Reconcilable Differences?
Did you know that about one in five US adults say they get their news primarily through social media? In addition to news, however, social media platforms have become incredibly efficient tools for spreading propaganda and disinformation, credited with interference in elections in the US, Mexico, France, and the Philippines, as well as the Brexit referendum. The consequences of influence campaigns include a deepening partisan divide and increased violence against individuals, ethnic groups, and religious minorities.
If information is the currency of democracy, and if exercising an informed choice is its backbone, then what is at stake is nothing less than the future of democracy itself. So how do you counter false narratives and influence campaigns? Our guest, Alicia Wanless, is a non-resident scholar at the Carnegie Endowment for International Peace and a PhD researcher at King’s College London. Alicia conducts content and network analysis to research how we shape and are shaped by a changing information environment.
Guest: Alicia Wanless
Hosted by: Alexa Raad & Leslie Daigle
Related material:
- https://www.journalism.org/2020/07/30/americans-who-mainly-get-their-news-on-social-media-are-less-engaged-less-knowledgeable/
- https://carnegieendowment.org/experts/1795
Transcript (Beta):
Edited for clarity and length.
Alexa: Now more so than ever, social media platforms such as Twitter, Facebook, WhatsApp, Instagram, and the like have become an immediate and accessible source of news and information. Consequently, about one in five US adults say they get their news primarily through social media, according to a survey published by Pew Research in July of 2020. These platforms have also become incredibly efficient tools for spreading propaganda and disinformation. Organized influence campaigns have leveraged these platforms as uber-efficient vehicles for spreading their message on topics as varied as politics, health, and culture.
Influence campaigns have been credited with interference, not only in the US elections in 2016, but also the 2012 elections in Mexico, the 2016 elections in the Philippines, the 2016 Brexit referendum, the 2017 elections in France, and so on. Some of the consequences of influence campaigns can be seen in the deepening partisan divide in the United States, the increased violence against individuals, ethnic groups, or religious minorities, as well as resistance to vaccination efforts in the midst of a raging pandemic.
To counter the false narratives propagated by influence campaigns on social media requires research: specifically, research on how influence operations are initiated and spread, which ones are gaining traction and with whom, and what impact interventions could have. And yet academic research on influence operations has thus far been stymied by inadequate access to the data the companies possess. If information is the currency of democracy, and if exercising an informed choice is its backbone, then what is at stake is nothing less than the future of democracy itself.
Leslie: Our guest today is Alicia Wanless. Alicia is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. She researches how people shape and are shaped by a changing information space. Alicia conducts content and network analysis, and has developed original models for identifying and analyzing digital propaganda campaigns, drawing on more than a decade of experience researching and analyzing the information environment, with a focus on propaganda and information warfare.
Alicia applies this learning to support governments, militaries, and major tech companies in developing policies and integrating information activities into training programs that better reflect how the information environment is manipulated. Alicia is currently a PhD researcher at King’s College London, exploring alternative frameworks for understanding the information environment. Her work has been featured on the CBC, in Forbes, and in The Strategy Bridge.
Welcome Alicia.
Alicia Wanless: Thank you for having me. I absolutely love this name, TechSequences!
Leslie: I’m actually really good at coming up with horrible, horrible names. I guess one out of every thousand ideas actually works!
Alicia Wanless: I just loved the idea of it, because so much of what happens when humans introduce a new piece of technology changes how we engage with information, and it has these cascading consequences that are seldom considered. I just feel like that name is so perfect for encapsulating the time we live in and these unforeseen things that continue to happen.
Leslie: Alexa and I, when we started this up, were thinking about all the things that we had seen, having been around, not at the dawn of time, not even at the dawn of the internet, but having seen some of it being developed. It’s like, yeah, this is really not where we thought it was going to go.
And, it’s important to look at the different facets of it. But why don’t we set the scene here for today’s discussion with some definitions. So are there any differences between disinformation campaigns and influence campaigns? And if so, what are they?
Alicia Wanless: So an influence operation is the coordinated attempt to affect an audience or an outcome for a specific aim.
And within that rubric, many things can happen and be done. One tactic that is rather common, though it doesn’t have to be there, is the use of disinformation, which is essentially spreading misleading or false information intentionally.
Leslie: So what is a counter influence operation? What are some examples of that kind of activity?
Alicia Wanless: I would start out by saying I’m not sure that “counter influence operations” is a term that’s used so much, and what would fall under it isn’t necessarily very well defined. In terms of the name of our project, I think when we were pulling it together, there was some desire among the decision makers who were consulted that we make it very clear we weren’t working on influence operations, but rather trying to look at measures that might address them or counter them in some way.
And thus this term crept in there. Typically, anything that’s done to counter an influence operation would be aimed at hindering it, refuting it, or downplaying it. And that could include a broad spectrum of things. Really, it might be best thought of in terms of the points of intervention.
So for example, social media companies can intervene and add a policy or tweak their platform to try to curtail the spread of something, or take it down entirely. A government might come up with regulation or laws outlining what’s legal and what’s not. So, for example, you brought up earlier the use of influence in elections. It’s almost easier to say that influence operations are always present during elections, because somebody is running them, guaranteed, whether it’s politicians seeking power, lobbyists who have an interest in moving something forward, or, as we’ve seen, foreign governments meddling.
So there’s a broad array of actors. But in that context, a government can come up with laws to determine who can actively campaign during an election. There are also other points of intervention, like education, right? Media literacy, and teaching people about the information that they’re consuming and how the information environment works.
And then under that, a lot of the work really has been focused on fact checking more than anything else: going through and verifying whether something is accurate, surfacing it if it isn’t, and then feeding that back into the platform. So the social media platforms rely heavily on third-party fact-checkers to moderate content.
So it’s quite a broad range of things that could be done.
Alexa: So you named a couple of ways social media platforms are countering influence operations. We talked to the founder of Reality Team, Deb Lavoy, on one of our podcasts, who tries to, based on your definition, really counter influence operations, particularly disinformation, by getting in at the very top of the funnel, if you will, and countering it with facts. What other entities are really engaged in countering some of these campaigns, how effective have they been, and how effective have the social media companies been?
Alicia Wanless: All really good questions, difficult questions. So again, it depends on how broadly we take the definition of countering here, because all of these contribute in a way, right?
Even research surfacing who’s behind influence operations, especially if they’ve obfuscated their origins, is a form of countering, because it sheds light on things. And so when we went to look at the field that’s researching and countering influence operations more broadly, we found about 460 initiatives worldwide. They range widely and can include things like fact checking, media support, and media organizations, where a lot of the fact checking sits.
They can also include a smaller number of tech companies, like the one you mentioned, that are developing solutions, trying to automate the processes, trying to automate the identification of things. Some of them have gone further to create games, gamifying it, trying to teach people to spot disinformation, fake news, things like that.
There is a broad array of actors engaging in countermeasures, depending on what those are: the private sector, academia, some governments, and other researchers. In terms of the efficacy of what’s being done, actually little is known. We have a study coming out shortly, conducted by our friends at Princeton’s Empirical Studies of Conflict project, that looks at the literature on the efficacy of interventions. The vast majority of the literature in that space focuses on fact checking, which is not surprising considering it makes up the bulk of countermeasures to influence operations. But there is very little on understanding the interventions made by the social media platforms themselves.
And when we took a look at the publicly disclosed interventions that platforms have made, we found that they’re not telling us whether they’re measuring them either. So we have a very, very limited understanding of what works and what doesn’t. Add to that, the research tends to get done in a very controlled, experimental kind of space, right? It’s not out in the wild, as people normally experience information, and that’s really hard to achieve. And of course we couldn’t do that type of research without collaboration from the platforms anyway, because you’d have to have ongoing access to data to be able to measure a change after you’ve implemented some sort of intervention.
So it’s a big gaping hole right now in our understanding.
Alexa: So I understand, but in terms of countering it, what are you measuring? Are you measuring whether injecting some truth into the information sphere counters the disinformation that’s out there, or are you measuring the impact on people’s beliefs, so that more and more people are now convinced, for example, that vaccinating for COVID is a good idea? What are you measuring for efficacy?
Alicia Wanless: So that would really depend on the way that the study is designed and what’s being looked at and with such a broad range of potential interventions, it’s really hard to say that there’s this one way and that’s how they’re all going to do them.
So some of them may rely on surveys to understand what the user reports, in terms of having been presented with different types of information over time, or, if they’ve been presented with something corrective, whether that changed their perspective. And that’s probably not the best way, because you’re relying on people to be honest, and to actually understand how they process information themselves, which I would argue most of us aren’t really that aware of. Some of them may measure things like behavior, right? So the best types of studies might look at behavior changes over time, so you’re not guessing what’s happening in people’s heads.
So did they stop going to, you know, dodgy websites to access information after they’ve been treated with these types of warnings over time? Did that change their consumption pattern to seek out other types of information that might be deemed more reliable? Are people being exposed less to bad content over time?
These are all things that generally, if you’re going to study them, you have to have a baseline for: what was the state before you made the change, and how are you going to measure it after it happens? And that can be really tricky to do, especially with social media data.
Leslie: I was going to say, in making the distinction between influence and disinformation campaigns: disinformation, I think it can largely be agreed, is a bad thing, right? This is stuff that should be stopped. But as you pointed out, elections are about influence. We’re always trying to influence, so influence in general is not necessarily a bad thing, but there are parameters, or should be parameters, as in who is applying the influence. Are they legitimate actors in the discussion? And in the context of elections, entities from elsewhere should not be meddling.
Alexa: Quite legitimate motives.
Leslie: Yeah.
Alicia Wanless: So you’ve just touched on the fact that my topic is one big Swiss cheese slice.
There are so many holes and so much work to do that we should all be happily and gainfully employed for years to come. So right now, when it comes to influence operations, we don’t exactly have a great framework of objective criteria to delineate what would make one acceptable and another unacceptable.
There are certainly some things that are pretty apparent, that we tend to agree on at least at a surface level. Many people agree that disinformation is not really great for our information environment. But then you start to get into the weeds: who determines what is actually true and what isn’t, and how do you discern the malign when it comes to political opinion?
Because a lot of it may not actually be rooted in fact, but in how people feel and how they perceive the world. And with that, it slides pretty quickly into this debate over definitions. So one of the things that we desperately need is to find these kinds of objective criteria.
What are they? Are they drawing from human rights codes, saying you can’t push violence and hatred against specific people? Are we going to want, as a society, that people who have a particularly big reach have to really disclose who they are, and maybe be held to a higher account?
Should they run afoul when they do mislead or say something incorrect? Does it matter whether they do it intentionally, if they’ve got a greater responsibility in terms of their audience? These are all things that most Western societies haven’t really been great at articulating, and then putting into practice to measure against.
Leslie: But I wonder if in some sense we haven’t seen some of that because the problem itself is not necessarily new. It goes right back to the whole issue of having an informed population in order to have a democratically run country. But the issue is that with the technology of the internet and the platforms that run on it, everything goes a lot faster.
Everything has a much further reach. It certainly crosses borders. And so part of the question is whether there are lessons to be learned from previous efforts in structuring what is legitimate. What is considered a properly run election in the US is largely based on trying to manage some of that by identifying who the rightful actors are.
You should not have campaigns run by people from outside the country. Are there lessons that can be learned and equally applied at scale?
Alicia Wanless: Definitely. Let me just say I’m a big proponent of looking to the past and trying to draw from experiences in different areas and also in different fields.
One thing that drives me quite nuts in this space is that there’s an entire body of work around countering violent extremism that preceded the interest in influence operations and propaganda, and it went through many of the same problems that we face now. And it’s directly related, because what is extremist recruitment if not a big influence operation to get people on their side, or to instill fear to affect decision-making? So we do need more of that writ large. There are some things we could look to that may be very easily adapted. There was one piece that Jonathan Herman wrote for our policy proposals project that looked at borrowing from radio broadcasting, and the licensing around it, for the social media model. Basically, what he was arguing was that if a person starts to hit a certain threshold of followers, whether those be friends or otherwise, why wouldn’t the next step be to have them take a bit of a test, so that they demonstrate that they understand the operating rules of being on that platform?
And when they run afoul of that, you just take them down because they’ve agreed — they’ve shown that they know how this works. They know what it is. It doesn’t matter whether they’ve done it intentionally. If they speed on the road and they get caught, there’s a rule, right. They get a ticket, and we don’t even have that kind of basic set of consequences to really guide how we govern this space.
So yes, there are many things that can be drawn from the way that we’ve tried to deal with other aspects of the information environment in the world that really could be adopted here.
Alexa: The example you just mentioned is really one of a unilateral decision, right? This is what Facebook and Twitter did in taking the former president, and some other right-wing extremists, off their platforms. Yet from what you’re describing, it sounds like you’re saying these social media platforms should have a framework based on how wide the reach of a particular voice is, and that at that level, some rules may apply. But Angela Merkel, the German chancellor, made the point that that really shouldn’t be the decision of Facebook or Twitter, but should be the decision of regulators, with a predefined framework. So which one really is the better idea?
Alicia Wanless: So I fundamentally do not believe that this should be left to the companies. However, in the absence of other leadership coming from governments and elsewhere, this is in fact what’s happening, right? The pressure is on social media to do something because, for better or worse, they’re the one point of intervention where something is actually happening. So I don’t think that they should be deciding; I think that they are. And if they are, then those rules need to be very clear, and they need to be disclosed. They need to be something that can be followed. They shouldn’t be arbitrary. And massive outcry or pushback because they made a decision?
That’s not a good enough reason to go back on it. And I think that that kind of flip-flopping, that lack of clarity, that lack of articulated rules, actually makes for a far worse operating environment than we could have. So again, I do think this should be governments, with input from tech companies to explain how things work, with transparency reporting on what kind of data they have available, et cetera. Civil society should be weighing in on how these decisions will affect other aspects of the information environment. But then we also need to be engaging academics who have the expertise on key areas, bringing it all together, and eventually taking that back out and educating the populace on what it’s like to live in an information age. We’re missing all of this systemic kind of response at this point.
Leslie: I want to be clear that I think anybody using these platforms for fomenting the kind of activity we saw on the 6th of January is wrong, and that should not be perceived as legal or acceptable. But if we say that the platforms themselves should not be deciding who is or is not permitted to use them, then I think we’re saying that they are somehow a fundamental right, as opposed to a private company.
So I don’t know if a different perspective would be: there should be agreed-on laws about what is or isn’t allowable behavior, and the companies held to account if they don’t remove people who are, at that point, effectively acting illegally on the platforms.
Alicia Wanless: It’s never as simple as either/or. Increasingly in this space, I find that for every problem that is raised, thinking through the solution reveals unending unintended consequences. This is why I like TechSequences!
So when it comes to social media platforms deciding unilaterally: the whole 2016 episode, and others like Myanmar, with the army faking Facebook pages to orchestrate attacks on a minority group, have put pressure on the companies to start to draw lines in the sand, and then determine who should have access or not, or what kind of behavior is acceptable.
It is possible for the private companies who have built that space to unilaterally decide what happens on their platform and kick off anybody who disagrees, or to do nothing about it. There’s nothing that mandates that they host those people. And in many cases, they’re not selling a service directly to those users.
So the obligations are not high there. At the same time, it’s really hard to govern an information environment like that. Even if you have the community and it’s somewhat gated, it’s still difficult. The bigger the scale of the platform, the harder it is to achieve that. I would bring up some analogies from the past that tell me that this is a problem that is impossible to solve entirely.
When the printing press was invented, people were bemoaning the fact that there was such a rise in information that nobody could possibly read it all. We also had endless problems with pamphlets stoking popish fears in England, for example. There was a lot of misinformation and sensationalism being moved through this new print product.
And this caused a lot of alarm. In England in the period of the English Civil War, for example, Charles had pretty strong control over the printing presses within London, and we’re talking about a handful. But even then he couldn’t control the information environment, because printing presses could operate as they wished in Scotland and flood the market with their own content. Content could also come from the continent, bringing all of that back into England. So it was really difficult even in a very small place where you would think you could control most of what was coming out.
So, I don’t know. I think maybe we look at this problem incorrectly. We think that there’s going to be a happy utopian end state where everything balances out, when really what we need to have is a very frank discussion about the role of information and influence in society and in democracies, and maybe be a little bit more honest with ourselves about how deeply embedded it is in our economic model.
Leslie: So that touches on two areas that I sort of wanted to go to. One, the whole notion of selling is heavily based on influence at this point. Two, you’ve talked about what you think might be reasonable in terms of, effectively, different levels of license or agreements for how to handle yourself on these platforms if you have a very large following. But the flip side of that is: to what extent is it necessary to have that kind of presence today in order to be successful in the “real world,” in “real life”?
And I’m using scare quotes, right? I mean there’s the flip side of it as well.
Alicia Wanless: Absolutely. There’s a good book Luciano Floridi wrote, The Fourth Revolution, and it’s this whole idea that we’ve moved into this information age, and most of the countries that are in it don’t actually understand what that means.
And essentially, we are utterly dependent on information, communication technologies for every aspect of our wellbeing. Our society is like completely driven and dependent on it.
That’s a mental shift of realizing that there is no online and offline, right? This is what our world is. It is extremely interconnected. It is a globalized information environment. We’re exposed to information 24/7. We’re carrying it around in our pocket. And not only that, it’s not just passively coming at us. It’s also tracking us. It’s putting together pretty detailed snapshots on who and what we are and our preferences and where we go, what we do.
We’re also engaging in it. We can become propagandists ourselves. When we feel that a message or an issue resonates with us, we will very actively get involved in sharing that potentially online. And I’m not sure a person could entirely be successful in this society without being online in some capacity.
At this point in time, it may not be social media, but email, computers, online banking, getting a job. All of that will take place in a digitally mediated environment.
Alexa: Correct. I think your example of looking to the past is an interesting one, because if you think of broadcast journalism, for example: the people who have a lot of followers and who are extremists have also threatened that they will start their own platforms.
So do you think that some of these extremist groups and people who have been kicked off these platforms can now, in an information economy where eyeballs equal money equal power, go and create their own alternative channel? And if that happens, then what?
Alicia Wanless: Fortunately, it’s extremely expensive to create a social media platform from scratch and the revenue model is heavily derived from having a high volume of users. So the bigger question is …
Alexa: advertisers, right? Because users don’t pay. So you have to attract advertisers.
Alicia Wanless: Exactly. And so the user base is what attracts the advertisers; being able to target them is how they derive most of their revenue. So building that from scratch is extremely difficult, extremely expensive. The question would be whether they can get an advertiser base that would be willing to put money in.
And because there are so many other campaigns that would be attempting to silence it, it may get more attention, but also struggle to find advertisers who aren’t worried about the taint of advertising on that platform. So I think it’s quite unlikely that this will happen in the short term.
I think the bigger question, looking at the divide between left and right in, say, the US, is: will somebody emerge who is less distasteful than the past leader and some of his supporters have been? Could there emerge somebody that far right who knows how not to break all the rules, who can still use those platforms and galvanize?
And if that starts to happen, I think there might be more of a possibility that advertisers would come back, and that it’s possible to create that. But then the question would be: do they really need to move off the platform, if they can manage to operate within the existing rules?
Alexa: We’ve seen a trend of companies, the advertisers, becoming more political. This didn’t happen before; they stayed out of the political fray. Now they’re joining in and weighing in on issues like Black Lives Matter, and sometimes against local government, as with the Georgia legislature when it voted in the new voting restrictions. So do you think this is a trend that we can count on and that will continue? Or do you think this is more of a blip in the larger scheme of where things are headed?
Alicia Wanless: That’s a good question. I think it’s definitely where trends are now. And I am not sure that it’ll go away too soon. It will be dependent on whether the movements and the people that push companies to respond that way, continue to be active and have a collective voice. So I think that’s probably in the short term likely to continue so long as there are serious underlying social problems that are not being addressed by politicians in particular.
Leslie: Oh, well!
Alexa: I guess if you can’t vote them in or vote them out, you can buy them in or buy them out.
Leslie: Yeah, I guess I’m just not optimistic about actually addressing all of the social problems since there are so very many of them that have for so long been so in need of addressing.
Alicia Wanless: And that’s the thing: I worry less about that being a trend that persists, because there’s potential for that to make the world a better place. What worries me more is the juxtaposition between the views on the right and the views on the left, and the incompatibility of the two being together anymore. And at some point there’s likely going to be a backlash either way, right?
You swing too far to the right, you’re going to get a backlash. You swing too far to the left, same thing happens. So I’m more worried about the compatibility of these worldviews existing in a single country, and from my thesis, that never really ends well.
Alexa: What do you mean? Well, how can it end?
Alicia Wanless: In conflict. I’ve been looking at conflicts from every major media epoch in history for my PhD thesis, starting in ancient Greece up to the present time; the last case is going to be Ukraine in the digital age. And one thing that really strikes me is that there tends to be the introduction of a new technology that changes how we deal with information. You get information floods as a result, an increase in the volume of outputs, and along with that always seems to come information pollution, and different actors using it. You always seem to have different types of entities trying to influence the information environment, including profiteers who are just doing it to make money.
In that scenario, there tends to emerge a new idea, and it brushes up against people who are holding an old idea. And as this goes forward, they become less and less compatible, until conflict ultimately erupts and they can’t even see each other’s viewpoints. There’s something so reminiscent of the time that we’re living in as I go back through those cases, right?
We’ve had the internet and the web for twenty-five years, and less than that when you count social media. And we’ve got this massive uptick in information, a massive uptick in information pollution. We have new perspectives, too, that are saying we will not live with the inconsistencies, the cruelties, and all of these other things that have been so prevalent in society.
And you have others who don’t agree. And so those two worldviews, I don’t think can co-exist for long together in this kind of environment.
Leslie: To what extent do you think the technology really enforces that sort of divide? Insofar as a hundred years ago it was possible to have a US president who was largely crippled and nobody knew it, to the point where now everything is a photo op or an interview opportunity, and people are posturing on their social media platforms. So once you are in a particular camp, you can’t go against the party line at any level, because you’re always having to put out the memes, put out the phrases, to impress your following.
So do you think that easy access to these platforms for extending your influence causes a certain entrenchment of those positions that makes it really hard to ever back down or compromise?
Alicia Wanless: I’m not sure. I do think that there is an aspect of performance in which almost everybody is now engaged, and it’s not just politicians. For politicians, the shift isn’t just social media; I think it’s also a facet of cable TV and always-on television.
It is definitely a progression in how the information environment has developed: politicians have become celebrities. They’re no longer just politicians per se: they’re behaving like celebrities, engaging with celebrities, courting that kind of attention.
Our election campaigns have turned into a perpetual popularity contest. They never end, so it’s always there. I think that’s probably been one of the biggest shifts brought on by social media more than anything else, social media and cable news.
But I would say, too, that that’s happening across the board. If you look at experts in any given community, go to Twitter and watch them: the need for hot takes, for constantly sharing commentary and critiquing each other, is also a symptom of that same issue. And I’m not sure where it will lead us.
Does it mean that people won’t feel like they can back down because there’s this permanent trail of everything they said? I hope that isn’t the case, because now more than ever I think we could really use some empathy and understanding. We may not agree with the other side’s perspective, but if we can’t even so much as have a conversation while understanding their viewpoint, we won’t get to any kind of agreeable end.
Alexa: That reminds me. When I was researching this episode, I dug up this quote by John Adams, and it’s somewhat chilling: “Remember, democracy never lasts long. It soon wastes, exhausts, and murders itself. There never was a democracy yet that did not commit suicide.” Those are not terribly uplifting words.
So if we don’t want to go there, when we have a partisan divide so deep that people cannot even stand to be in the same room with one another or hear each other’s points of view, where do we go from here? Who has the biggest role to play in making a difference?
Alicia Wanless: In an ideal world, governments would start to show an interest beyond just continuing to maintain power. And this is part of the problem, right? Without solid political leadership that’s invested in finding ways to safeguard the integrity of the information environment, not just to get into office again but for the betterment of society, I’m not sure how far we can go.
Because again, we’re dealing with big players that are international. The tech companies: we keep coming back to them because they are the elephant in the room. Actually, they’re not even the elephant in the room, because everybody knows they’re there and everybody knows they’re really important. But they are companies that operate around the world, and they’ve essentially been left without any kind of legal framework to govern how they should be governing this space.
And that’s extremely tricky in a single country, much less trying to do it around the world.
Alexa: Do you think there should be a multilateral framework developed between, let’s say, the EU and the US, the greater superpowers, to govern these social media companies? Because some of them have already said, look, we want to be governed. We want to be regulated.
Leslie: I don’t even think it’s about the social media companies. I think what Alicia was getting at is: what are the right things for the people? The social media companies are the tool that amplifies things and makes them go faster.
Alicia Wanless: So where do we go from here? I think we really need to have a frank discussion of what actually fosters democracy, and of what our modern information environment means and entails. And then, from there, what are the things we could reasonably tweak that would help foster democratic thinking, participation, and engagement?
Because right now, the way it’s set up, I don’t think it’s necessarily geared toward that. It’s a very traditional model where a bunch of people who want power try to use the machinery of outgoing messaging to get people on board, to agree with them. The people are treated as this vague mass that can just be shaped and manipulated into coming along, and to a large degree, yes, they can. But that’s a huge vulnerability for a democracy, where the ability to make an informed and free decision is essentially where you derive your legitimacy. And then we’ve got a whole other series of questions that come to the fore that will be really uncomfortable.
Alexa: What you said makes me want to quote Jefferson. He said this in 1789: “Whenever the people are well-informed, they can be trusted with their own government; that, whenever things get so far wrong as to attract their notice, they may be relied on to set them to rights.” I think that’s what you’re saying.
Alicia Wanless: Yes. In the space around influence operations, so often we have this idea that if you just give people the truth, somehow they’re going to follow it. But that’s not necessarily good enough, because it’s not just about delivering them some facts; it’s also about educating them about their roles and obligations to society.
This isn’t just about voting every few years. If you’re part of a democracy, you have a civic duty to try to stay informed and to understand how things are run. And I don’t think we really have that culture in either the US or Canada, where I am.
Leslie: Earlier, when you were outlining what an influence campaign is, how to measure it, and how to counter it, I had an image of mutual escalation, of escalating battles. I think what you’re saying is: rather than using the technology to just push back more loudly on any individual activity or any individual thing that’s wrong, let’s have a discussion and figure out what we really want out of democracy and how we implement that, given the tools and the environment that we have. And I have to say, I can buy into that theory. That sounds like a positive future to me.
Alicia Wanless: Absolutely. And I think it is so much easier to go and look at the specific symptoms of a problem, right? We see all this bad content that we don’t like, we see bad behavior that we don’t like, and we get really fixated on that. But really it is just a symptom of an overall systemic malaise, and until we start to look at it as a systems problem, we aren’t going to make much headway.
Leslie: Well, it’s been fascinating and hard to believe that we’ve already zipped through a half hour or more!
Thank you so much for joining us for this conversation today.
Alexa: Thank you, Alicia.
Alicia Wanless: Thank you for having me.