Social Media Intelligence SOCMINT Training Course 20th-21st Nov 2019


The two-day Social Media Intelligence (SOCMINT) training course will equip delegates with the practical skills and confidence to search across key social networks, identify influencers, monitor key topics, and extract actionable intelligence to support decision-making. The course is also available as an onsite, client-customised private course; please contact the course administrator for details.

Who should attend:

The programme is particularly suited for:

  • Analysts
  • Policy officers
  • Information professionals
  • Researchers
All delegates will complete the training course with their own SOCMINT analysis dashboard customised to their work requirements

Why you should attend:

  • The practical, straightforward training will turn the complex into the simple, equipping delegates with the skills needed to improve productivity and gather key information to support decision-making
  • All the social media channels and digital tools we use are free and easy to use – they are cloud services and can be accessed after the training on any PC, laptop, tablet or mobile
  • Gain hands-on, practical skills and the confidence to immediately use Social Media Intelligence SOCMINT in everyday work
  • Understand how Twitter, LinkedIn, and Hootsuite can be used as well as a range of additional digital tools
  • Create an initial social media management dashboard (using the free version of Hootsuite) that can be used immediately for everyday work and enhanced over time
  • Learn how to identify influencers, enhance information gathering and improve decision support

What are the benefits:

Two days of practical, applicable, hands-on training that can be applied immediately.

  • Increase productivity and access to knowledge by taking advantage of social media intelligence
  • Confidence and empowerment based on acquiring practical hands-on skills
  • Learn how to identify influencers, enhance intelligence gathering and improve decision making


Delegates will learn how to use social media geo-location tools

What you will get:

  • Two full days of social media intelligence SOCMINT training
  • Lunch and catering included
  • A hand-picked digital toolkit directory for future reference
  • One month help-desk support by phone or e-mail after the training

Course agenda

Social Media Intelligence SOCMINT Training Course (2 Day)

Day 1 – 9.00am to 6.00pm

  • Introductions – you, me, what do you want from the training?
  • Latest statistics, trends, demographics, and research
  • The importance of listening & the importance of content
  • Best practice social media guidelines and behaviour, how to build an online reputation, dos and don'ts
  • Overview of the main social media channels and digital tools and how they can be used for social media intelligence
  • Practical Session – Twitter: Creating a personal Twitter profile, settings, Twitter search, identifying influencers, Twitter lists, creating and sharing content, monitoring specific topics, discreet monitoring of individuals, growing followers, Twitter ads, using Chrome extensions to assist productivity
  • Practical Session – LinkedIn: Creating and optimising a personal LinkedIn profile, settings, advanced search, discreet advanced search, LinkedIn groups, LinkedIn ads, recommendations
  • Practical Exercise 1: Using Twitter for social media intelligence
  • Practical Exercise 2: Using LinkedIn for social media intelligence
  • Q&A – End of day one

Day 2 – 9.00am to 6.00pm

  • Group Discussion – Recap of learning from Day 1
  • Practical Session – Hootsuite: Creating a free personal Hootsuite account, settings, importing Twitter & LinkedIn, creating real-time information streams from Twitter lists, identifying, monitoring and engaging, scheduling content, advanced search, geo-location search, automated analytics, and reporting
  • Practical Exercise 3: Using Hootsuite for social media intelligence
  • Practical Session – Using digital tools: Using a variety of free digital tools, delegates will learn additional skills including how to identify when a target is most likely to be online and also their possible geo-location
  • Demo – Best practice target engagement – Having identified a target, listened to their interests and discovered when they are most likely to be online, how can you engage and influence them?
  • Social media intelligence case studies – Delegates are shown and discuss 4 successful social media intelligence case studies
  • Next steps – Delegates are shown a 4-week social media intelligence plan to help embed learning
  • What have you learned? – Each delegate confirms the top 2 things they have learned
  • Q&A – End of course

The Social Media Intelligence SOCMINT training is available as an onsite and client customised private course. Please contact the course administrator for details.

Course Tutor

Andy Black – Tutor for SOCMINT course

Andy has over 25 years' experience in the software and information services sector. He has worked for companies including Perfect Information, Excalibur Technologies, and Business Objects, and for clients including Jane's Information, Reuters, and Clifford Chance. In 2005 he switched to marketing, PR, and communications, and has since been Head of Digital for a leading PR firm, managed Honda's social media monitoring, and directed lead-generation social media campaigns for Vodafone, as well as training B2B sales and marketing teams in digital skills and social selling.

Andy is a contractor to the UK Government and trains diplomats at the Foreign and Commonwealth Office in digital diplomacy.


How much of the Internet is fake?

How Much of the Internet Is Fake? Max Read of New York Magazine offers a fascinating analysis.

In late November, the Justice Department unsealed indictments against eight people accused of fleecing advertisers of $36 million in two of the largest digital ad-fraud operations ever uncovered. Digital advertisers tend to want two things: people to look at their ads and “premium” websites — i.e., established and legitimate publications — on which to host them. The two schemes at issue in the case, dubbed Methbot and 3ve by the security researchers who found them, faked both. Hucksters infected 1.7 million computers with malware that remotely directed traffic to “spoofed” websites — “empty websites designed for bot traffic” that served up a video ad purchased from one of the internet’s vast programmatic ad-exchanges, but that were designed, according to the indictments, “to fool advertisers into thinking that an impression of their ad was served on a premium publisher site,” like that of Vogue or The Economist.

Views, meanwhile, were faked by malware-infected computers with marvelously sophisticated techniques to imitate humans: bots “faked clicks, mouse movements, and social network login information to masquerade as engaged human consumers.” Some were sent to browse the internet to gather tracking cookies from other websites, just as a human visitor would have done through regular behavior. Fake people with fake cookies and fake social-media accounts, fake-moving their fake cursors, fake-clicking on fake websites — the fraudsters had essentially created a simulacrum of the internet, where the only real things were the ads.

How much of the internet is fake? Studies generally suggest that year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed maybe both at once, or in succession, as you turn it over in your head.

How Much of the Internet Is Fake? The metrics are fake.

Take something as seemingly simple as how we measure web traffic. Metrics should be the most real thing on the internet: They are countable, trackable, and verifiable, and their existence undergirds the advertising business that drives our biggest social and search platforms. Yet not even Facebook, the world’s greatest data-gathering organization, seems able to produce genuine figures. In October, small advertisers filed suit against the social-media giant, accusing it of covering up, for a year, its significant overstatements of the time users spent watching videos on the platform (by 60 to 80 percent, Facebook says; by 150 to 900 percent, the plaintiffs say). According to an exhaustive list at MarketingLand, over the past two years Facebook has admitted to misreporting the reach of posts on Facebook Pages (in two different ways), the rate at which viewers complete ad videos, the average time spent reading its “Instant Articles,” the amount of referral traffic from Facebook to external websites, the number of views that videos received via Facebook’s mobile site, and the number of video views in Instant Articles.

Can we still trust the metrics? After the Inversion, what’s the point? Even when we put our faith in their accuracy, there’s something not quite real about them: My favorite statistic this year was Facebook’s claim that 75 million people watched at least a minute of Facebook Watch videos every day — though, as Facebook admitted, the 60 seconds in that one minute didn’t need to be watched consecutively. Real videos, real people, fake minutes.

How Much of the Internet Is Fake? The people are fake.

And maybe we shouldn’t even assume that the people are real. Over at YouTube, the business of buying and selling video views is “flourishing,” as the Times reminded readers with a lengthy investigation in August. The company says only “a tiny fraction” of its traffic is fake, but fake subscribers are enough of a problem that the site undertook a purge of “spam accounts” in mid-December. These days, the Times found, you can buy 5,000 YouTube views — 30 seconds of a video counts as a view — for as low as $15; oftentimes, customers are led to believe that the views they purchase come from real people. More likely, they come from bots. On some platforms, video views and app downloads can be forged in lucrative industrial counterfeiting operations. If you want a picture of what the Inversion looks like, find a video of a “click farm”: hundreds of individual smartphones, arranged in rows on shelves or racks in professional-looking offices, each watching the same video or downloading the same app.

This is obviously not real human traffic. But what would real human traffic look like? The Inversion gives rise to some odd philosophical quandaries: If a Russian troll using a Brazilian man’s photograph to masquerade as an American Trump supporter watches a video on Facebook, is that view “real”? Not only do we have bots masquerading as humans and humans masquerading as other humans, but also sometimes humans masquerading as bots, pretending to be “artificial-intelligence personal assistants,” like Facebook’s “M,” in order to help tech companies appear to possess cutting-edge AI. We even have whatever CGI Instagram influencer Lil Miquela is: a fake human with a real body, a fake face, and real influence. Even humans who aren’t masquerading can contort themselves through layers of diminishing reality: The Atlantic reports that non-CGI human influencers are posting fake sponsored content — that is, content meant to look like content that is meant to look authentic, for free — to attract attention from brand reps, who, they hope, will pay them real money.

How Much of the Internet Is Fake? The businesses are fake.

The money is usually real. Not always — ask someone who enthusiastically got into cryptocurrency this time last year — but often enough to be an engine of the Inversion. If the money is real, why does anything else need to be? Earlier this year, the writer and artist Jenny Odell began to look into an Amazon reseller that had bought goods from other Amazon resellers and resold them, again on Amazon, at higher prices. Odell discovered an elaborate network of fake price-gouging and copyright-stealing businesses connected to the cultlike Evangelical church whose followers resurrected Newsweek in 2013 as a zombie search-engine-optimized spam farm. She visited a strange bookstore operated by the resellers in San Francisco and found a stunted concrete reproduction of the dazzlingly phony storefronts she’d encountered on Amazon, arranged haphazardly with best-selling books, plastic tchotchkes, and beauty products apparently bought from wholesalers. “At some point I began to feel like I was in a dream,” she wrote. “Or that I was half-awake, unable to distinguish the virtual from the real, the local from the global, a product from a Photoshop image, the sincere from the insincere.”

How Much of the Internet Is Fake? The content is fake.

The only site that gives me that dizzying sensation of unreality as often as Amazon does is YouTube, which plays host to weeks’ worth of inverted, inhuman content. TV episodes that have been mirror-flipped to avoid copyright takedowns air next to huckster vloggers flogging merch who air next to anonymously produced videos that are ostensibly for children. An animated video of Spider-Man and Elsa from Frozen riding tractors is not, you know, not real: Some poor soul animated it and gave voice to its actors, and I have no doubt that some number (dozens? Hundreds? Millions? Sure, why not?) of kids have sat and watched it and found some mystifying, occult enjoyment in it. But it’s certainly not “official,” and it’s hard, watching it onscreen as an adult, to understand where it came from and what it means that the view count beneath it is continually ticking up.

These, at least, are mostly bootleg videos of popular fictional characters, i.e., counterfeit unreality. Counterfeit reality is still more difficult to find—for now. In January 2018, an anonymous Redditor created a relatively easy-to-use desktop-app implementation of “deepfakes,” the now-infamous technology that uses artificial-intelligence image processing to replace one face in a video with another — putting, say, a politician’s over a porn star’s. A recent academic paper from researchers at the graphics-card company Nvidia demonstrates a similar technique used to create images of computer-generated “human” faces that look shockingly like photographs of real people. (Next time Russians want to puppeteer a group of invented Americans on Facebook, they won’t even need to steal photos of real people.) Contrary to what you might expect, a world suffused with deepfakes and other artificially generated photographic images won’t be one in which “fake” images are routinely believed to be real, but one in which “real” images are routinely believed to be fake — simply because, in the wake of the Inversion, who’ll be able to tell the difference?

Only 4% of the Internet is indexed by Google

How Much of the Internet Is Fake? Our politics are fake.

Such a loss of any anchoring “reality” only makes us pine for it more. Our politics have been inverted along with everything else, suffused with a Gnostic sense that we’re being scammed and defrauded and lied to but that a “real truth” still lurks somewhere. Adolescents are deeply engaged by YouTube videos that promise to show the hard reality beneath the “scams” of feminism and diversity — a process they call “red-pilling” after the scene in The Matrix when the computer simulation falls away and reality appears. Political arguments now involve trading accusations of “virtue signaling” — the idea that liberals are faking their politics for social reward — against charges of being Russian bots. The only thing anyone can agree on is that everyone online is lying and fake.

We ourselves are fake.

Which, well. Everywhere I went online this year, I was asked to prove I’m a human. Can you retype this distorted word? Can you transcribe this house number? Can you select the images that contain a motorcycle? I found myself prostrate daily at the feet of robot bouncers, frantically showing off my highly developed pattern-matching skills — does a Vespa count as a motorcycle, even? — so I could get into nightclubs I’m not even sure I want to enter. Once inside, I was directed by dopamine-feedback loops to scroll well past any healthy point, manipulated by emotionally charged headlines and posts to click on things I didn’t care about, and harried and hectored and sweet-talked into arguments and purchases and relationships so algorithmically determined it was hard to describe them as real.

Where does that leave us? I’m not sure the solution is to seek out some pre-Inversion authenticity — to red-pill ourselves back to “reality.” What’s gone from the internet, after all, isn’t “truth,” but trust: the sense that the people and things we encounter are what they represent themselves to be. Years of metrics-driven growth, lucrative manipulative systems, and unregulated platform marketplaces have created an environment where it makes more sense to be fake online — to be disingenuous and cynical, to lie and cheat, to misrepresent and distort — than it does to be real. Fixing that would require cultural and political reform in Silicon Valley and around the world, but it’s our only choice. Otherwise, we’ll all end up on the bot internet of fake people, fake clicks, fake sites, and fake computers, where the only real thing is the ads.

A version of this article appeared in the December 24, 2018, issue of New York Magazine.

The impact of bots on opinions in social networks

Social networks have given us the ability to spread messages and influence large populations very easily. Malicious actors can take advantage of social networks to manipulate opinions using artificial accounts, or bots. It is suspected that the 2016 U.S. presidential election was the target of such social network interference, potentially by foreign actors. Foreign influence bots are also suspected of having attacked European elections. Multiple research studies confirm that the bots' main action was the sharing of politically polarised content in an effort to shift opinions. The potential threat to election security from social networks has become a concern for governments around the world.

In the U.S., members of Congress have not been satisfied with the response of the major social networks and have asked them to take action to prevent future interference in the U.S. democratic process by foreign actors. In response, major social media companies have taken serious steps: Facebook has identified several pages and accounts tied to foreign actors, and Twitter has suspended over 70 million bot accounts.

Despite all of the efforts taken to counter the threat posed by bots, one important question remains unanswered: how many people were impacted by these influence campaigns? More generally, how can we quantify the effect of bots on the opinions of users in a social network? Answering this question would allow one to assess the potential threat of an influence campaign. Also, it would allow one to test the efficacy of different responses to the threat. Studies have looked at the volume of content produced by bots and their social network reach during the 2016 election. However, this data alone does not indicate the effectiveness of the bots in shifting opinions.

The challenge is we do not know what would have happened if the bots had not been there. Such a counterfactual analysis is only possible if there is a model which can predict the opinions of users in the presence or absence of bots. For a model to be useful in assessing the impact of bots, it must be validated on real social network data. Once validated, an opinion model can then be used to assess the impact of different groups of bots.
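The remove-the-bots counterfactual can be sketched with a toy model. The following is a minimal DeGroot-style averaging illustration of my own, not the model used in the MIT study: compute equilibrium opinions with a stubborn bot present, delete the bot, re-run, and compare.

```python
import numpy as np

def equilibrium_opinions(adj, stubborn, initial, iters=200):
    """DeGroot-style dynamics: each non-stubborn user repeatedly adopts
    the average opinion of the accounts they follow (rows of adj)."""
    opinions = initial.astype(float).copy()
    weights = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic follow matrix
    for _ in range(iters):
        updated = weights @ opinions
        updated[stubborn] = initial[stubborn]  # stubborn nodes (the bots) never move
        opinions = updated
    return opinions

# Toy network: nodes 0-3 are humans (initial opinion 0.0),
# node 4 is a stubborn bot relentlessly pushing opinion 1.0.
adj = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
], dtype=float)
initial = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
stubborn = np.array([False, False, False, False, True])

with_bot = equilibrium_opinions(adj, stubborn, initial)

# Counterfactual: remove the bot (drop its row and column) and re-run.
without_bot = equilibrium_opinions(adj[:4, :4], stubborn[:4], initial[:4])

shift = with_bot[:4] - without_bot  # per-user opinion shift attributable to the bot
```

In this toy graph the humans start neutral and, because the bot is the only stubborn node they can all reach, they drift almost all the way to its opinion; without the bot they stay at zero. A validated model on real data would instead use empirical follow graphs, activity-weighted edges, and opinions estimated from users' tweets.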

The Impact of Bots on Opinions in Social Networks
Visualization of the network of Twitter users discussing the second 2016 presidential debate. Node sizes are proportional to their follower count in the network and node colors indicate their tweet-based opinion. Nodes favoring Trump are red and nodes favoring Clinton are blue.

A recent research report from the Massachusetts Institute of Technology (MIT) presented a method to quantify the impact of bots on the opinions of users in a social network. The analysis focused on a network of Twitter users discussing the 2016 presidential election between Hillary Clinton and Donald Trump. The key strategy was to find a model for opinion dynamics in the network. First, the researchers validated the model by showing that the user opinions it predicted aligned with the opinions those users expressed in their social media posts. Second, they identified bots in the network using a custom-developed algorithm. Third, they used the opinion model to calculate how opinions shift when the bots are removed from the network.

MIT discovered that a small number of bots had a disproportionate impact on network opinions, primarily because of their elevated activity levels. In the dataset, the bots supporting Clinton caused a bigger shift in opinions than the bots supporting Trump, even though there were more Trump bots in the network.

The Digital Influence Machine

The advertising capabilities of Facebook, Twitter, and other social networks have been used in recent political elections across the world. A new report argues that today's digital advertising infrastructure creates disturbing new opportunities for political manipulation and other forms of anti-democratic strategic communication. Ad platforms, web publishers, and other intermediaries have developed an infrastructure of data collection and targeting capacities that the report calls the Digital Influence Machine (DIM).

The DIM incorporates a set of overlapping technologies for surveillance, targeting, testing, and automated decision-making, designed to make advertising – from the commercial to the political – more powerful and efficient. The report claims the DIM can identify and target weak points where groups and individuals are most vulnerable to strategic influence, and describes it as a form of information warfare.


Unlike campaigns of even a decade ago, data-driven advertising allows political actors to zero in on the audiences believed to be most receptive and pivotal for very specific messages, while minimising the risk of political blowback by limiting those messages' visibility to people who might react negatively.

The various technologies and entities of the Digital Influence Machine cohere around three interlocking communication capacities:

  • To use sprawling systems of consumer monitoring to develop detailed consumer profiles
  • To target customised audiences with strategic messaging across devices, channels, and contexts
  • To automate and optimise tactical elements of influence campaigns, leveraging consumer data and real-time feedback to test and tweak key variables including the composition of target publics and the timing, placement, and content of ad messages

The social influence of the DIM, like all technological systems, is also largely a product of the political, economic, and social context in which it developed. The report analysed three key shifts in the US media and political landscape that contextualise the use of the DIM to manipulate political activity:

  • The decline of professional journalism
  • The expansion of financial resources devoted to political influence
  • The growing sophistication of targeted political mobilization in a regulatory environment with little democratic accountability

The report documented three distinct strategies that political actors currently use to weaponise the DIM:

  • Mobilize supporters through identity threats
  • Divide an opponent’s coalition
  • Leverage influence techniques informed by behavioral science

Despite this range of techniques, weaponised political ad targeting will rarely, if ever, be effective in changing individuals' deeply held beliefs. Instead, the goals of weaponised DIM campaigns will be to amplify existing resentments and anxieties, raise the emotional stakes of particular issues or foreground some concerns at the expense of others, stir distrust among potential coalition partners, and subtly influence decisions about political behaviors (like whether to go vote or attend a protest). In close elections, if these tactics offer even marginal advantages, groups willing to engage in ethically dubious machinations may reap significant benefits.

The report suggested that key points of intervention for mitigating harms are the technical structures, institutional policies, and legal regulations of the DIM. One significant further step companies could take would be to categorically refuse to work with dark money groups. Platforms could also limit weaponisation by requiring explicit, non-coercive user consent for viewing any political ads that are part of a split-testing experiment. Future ethical guidelines for political advertising could be developed in collaboration with independent committees representing diverse communities and stakeholders. All of these possible steps have benefits, risks, and costs, and should be thoroughly and seriously considered by corporations, regulators, and civil society.

The report concluded that whatever the future of online ad regulation, the consideration of political ads will only be one component in a larger effort to combat disinformation and manipulation. Without values like fairness, justice, and human dignity guiding the development of the DIM and a commitment to transparency and accountability underlying its deployment, such systems are antithetical to the principles of democracy.



Mueller & Russian meddling – an inconvenient truth in the age of digital marketing

Mueller, Russian meddling and digital marketing

This fascinating and informative article is by the blogger Moon of Alabama.

“Last week the U.S. Justice Department indicted the Russian Internet Research Agency on some dubious legal grounds. It covers thirteen Russian people and three Russian legal entities. The main count of the indictment is an alleged “Conspiracy to Defraud the United States”.

The published indictment gives support to Moon of Alabama’s long-held belief that there was no “Russian influence” campaign during the U.S. election. What is described and denounced as such was instead a commercial marketing scheme which ran click-bait websites to generate advertisement revenue and created online crowds around virtual persona to promote whatever its commercial customers wanted to promote. The size of the operation was tiny when compared to the hundreds of millions in campaign expenditures. It had no influence on the election outcome.

The indictment is fodder for the public to prove that the Mueller investigation is “doing something”. It distracts from further questioning the origin of the Steele dossier. It is full of unproven assertions and assumptions. It is a sham in that none of the Russian persons or companies indicted will ever come in front of a U.S. court. That is bad because the indictment is built on the theory of a new crime which, unless a court throws it out, can be used to incriminate other people in other cases and might even apply to this blog. The latter part of this post will refer to that.

In the early 1990s, some dude in St. Petersburg made a good business selling hot dogs. He opened a colourful restaurant. Local celebrities and politicians were invited to gain notoriety while the restaurant served cheap food at too high prices. It was a good business. A few years later he moved to Moscow and gained contracts to cater to schools and to the military. The food he served was still substandard.

But catering bad food as school lunches gave him, by chance, the idea for a new business:

Parents were soon up in arms. Their children wouldn’t eat the food, saying it smelled rotten.
As the bad publicity mounted, Mr Prigozhin’s company, Concord Catering, launched a counterattack, a former colleague said. He hired young men and women to overwhelm the internet with comments and blog posts praising the food and dismissing the parents’ protests.

“In five minutes, pages were drowning in comments,” said Andrei Ilin, whose website serves as a discussion board about public schools. “And all the trolls were supporting Concord.”

The trick worked beyond expectations. Prigozhin had found a new business. He hired some IT staff and low paid temps to populate various message boards, social networks and the general internet with whatever his customers asked him for.

Do you have a bad online reputation? Prigozhin can help. His internet company will fill the net with positive stories and remarks about you. Your old, bad reputation will be drowned out by the new, good one. Want to promote a product or service? Prigozhin's online marketeers can address the right crowds.


To achieve those results the few temps who worked on such projects needed to multiply their online personalities. It is better to have fifty people vouch for you online than just five. No one cares if these are real people or just virtual ones. The internet makes it easy to create such sock-puppets. The virtual crowd can then be used to push personalities, products or political opinions. Such schemes are nothing new or special. Every decent “western” public relations and marketing company will offer a similar service and has done so for years.

While it is relatively easy to have sock-puppets swamp the comment threads of such sites as this blog, it is more difficult to have a real effect on social networks. These depend on multiplier effects. To gain many real “likes”, “re-tweets” or “followers” an online persona needs a certain history and reputation. Real people need to feel attached to it. It takes some time and effort to build such a multiplier personality, be it real or virtual.

At some point, Prigozhin, or whoever by then owned the internet marketing company, decided to expand into the lucrative English-speaking market. This would require building many English-language online personas and giving them some history and time to gain crowds of followers and a credible reputation. The company sent a few of its staff to the U.S. to gather impressions, pictures, and experience of the surroundings. They would later use these to impersonate U.S. locals. It was a medium-size, long-term investment of maybe a hundred thousand bucks over two or three years.

The U.S. election provided an excellent environment to build reputable online personas with large followings of people with distinct mindsets. The political affinity was not important. The personas only had to be very engaged and stick to their issue – be it left or right or whatever. The sole point was to gain as many followers as possible who could be segmented along socio-political lines and marketed to the company’s customers.

Again – there is nothing new to this. It is something hundreds, if not thousands of companies are doing as their daily business. The Russian company hoped to enter the business with a cost advantage. Even its mid-ranking managers were paid as little as $1,200 per month. The students and other temporary workers who would ‘work’ the virtual personas as puppeteers would earn even less. Any U.S. company in a similar business would have higher costs.

In parallel to building virtual online personas, the company also built some click-bait websites and groups and promoted them through mini Facebook advertisements. These were the “Russian influence ads” on Facebook that the U.S. media were so enraged about. They included the promotion of a Facebook page about cute puppies. Back in October we described how those “Russian influence” ads (most of which were shown after the election or were not seen at all) were simply part of a commercial scheme:

The pages described and the ads leading to them are typical click-bait, not part of a political influence op.

One builds pages with “hot” stuff that hopefully attracts lots of viewers. One creates ad-space on these pages and fills it with Google ads. One attracts viewers and promotes the spiked pages by buying $3 Facebook mini-ads for them. The mini-ads are targeted at the most susceptible groups.
A few thousand users will come and look at such pages. Some will ‘like’ the puppy pictures or the rant for or against LGBT issues and spread them further. Some will click the Google ads. Money then flows into the pockets of the page creator. One can rinse and repeat this scheme forever. Each such page is a small effort for a small revenue. But the scheme is highly scalable and parts of it can be automated.

Because of the myriad of U.S. sanctions against Russia, the monetization of these business schemes required some creativity. One can easily find the name of a real U.S. person together with the assigned social security number and date of birth. Those data are enough to open, for example, a PayPal account under a U.S. name. A U.S. customer of the cloaked Russian internet company could then pay into the PayPal account, and the money could be transferred from there to Moscow. These accounts could also be used to buy advertising on Facebook. The person whose data was used to create the account would never learn of it and would suffer no loss or other damage. Another scheme is to simply pay some U.S. person to open a U.S. bank account and then hand over the ‘keys’ to that account.

The Justice Department indictment is quite long and detailed. It must have been expensive. If you read it do so with the above in mind. Skip over the assumptions and claims of political interference and digest only the facts. All that is left is, as explained, a commercial marketing scheme.

I will not go into all the details of the indictment, but here are some points that support the above description.

Point 4:

Defendants, posing as U.S. persons and creating false U.S. personas, operated social media pages and groups designed to attract U.S. audiences. These groups and pages, which addressed divisive U.S. political and social issues, falsely claimed to be controlled by U.S. activists when, in fact, they were controlled by Defendants. Defendants also used the stolen identities of real U.S. persons to post on social media accounts. Over time, these social media accounts became Defendants’ means to reach significant numbers of Americans …
Point 10d:

By in or around April 2014, the ORGANIZATION formed a department that went by various names but was at times referred to as the "translator project." This project focused on the U.S. population and conducted operations on social media platforms such as YouTube, Facebook, Instagram, and Twitter. By approximately July 2016, more than eighty ORGANIZATION employees were assigned to the translator project.
(Some U.S. media today made the false claim that the company spent $1.25 million per month on its U.S. campaign. But Point 11 of the indictment says that the company ran a number of such projects directed at a Russian audience, while only the one described in 10d above was aimed at a U.S. audience. All these projects together had a monthly budget of $1.25 million.)

(Points 17, 18 and 19 indict individual persons who worked for the "translator project" "to at least in and around [some month] 2014". It is completely unclear how these persons, who seem to have left the company two years before the U.S. election, are supposed to have anything to do with the claimed "Russian influence" on the U.S. election and the indictment.)

Point 32:

Defendants and their co-conspirators, through fraud and deceit, created hundreds of social media accounts and used them to develop certain fictitious U.S. personas into “leader[s] of public opinion” in the United States.
The indictment then goes on and on describing the “political activities” of the sock-puppet personas. Some posted pro-Hillary slogans, some anti-Hillary stuff; some were pro-Trump, some anti-everyone; some urged people not to vote, others to vote for third-party candidates. The sock-puppets did not create or post fake news. They posted mainstream media stories.

Some of the personas called for going to anti-Islam rallies while others promoted pro-Islam rallies. The Mueller indictment lists a total of eight rallies. Most of these did not take place at all. No one joined the “Miners For Trump” rallies in Philly and Pittsburgh. A “Charlotte against Trump” march on November 19 – after the election – was attended by one hundred people. Eight people came for a pro-Trump rally in Fort Myers.

The sock-puppets called for rallies to establish themselves as ‘activist’ and ‘leadership’ personas, to generate more online traffic and additional followers. There was, in fact, no overall political trend in what the sock-puppets did. The sole point of all such activities was to create a large total following by having multiple personas which together covered all potential socio-political strata.

At Point 86 the indictment turns to Count Two – “Conspiracy to Commit Wire Fraud and Bank Fraud”. The puppeteers opened, as explained above, various Paypal accounts using ‘borrowed’ data.

Then comes the point which confirms the commercial marketing story as laid out above:

Point 95:

Defendants and their co-conspirators also used the accounts to receive money from real U.S. persons in exchange for posting promotions and advertisements on the ORGANIZATION-controlled social media pages. Defendants and their co-conspirators typically charged certain U.S. merchants and U.S. social media sites between 25 and 50 U.S. dollars per post for promotional content on their popular false U.S. persona accounts, including Being Patriotic, Defend the 2nd, and Blacktivist.
There you have it. There was no political point to what the Russian company did. Whatever political slogans one of the company’s sock-puppets posted had only one aim: to increase the number of followers for that sock-puppet. The sole point of creating a diverse army of sock-puppets with large followings was to sell the ‘eyeballs’ of those followers to the paying customers of the marketing company.

There were, according to the indictment, eighty people working on the “translator project”. These controlled “hundreds” of sock-puppet accounts, each with a distinct “political” personality. Each of these sock-puppets had a large number of followers – several hundred thousand in total. Now let’s assume that one to five promotional posts can be sold per day on each sock-puppet’s content stream. The scheme then generates several thousand dollars per day ($25 per promo, hundreds of sock-puppets, 1-5 promos per day per sock-puppet). The costs were limited to the wages of up to eighty persons in Moscow, many of them temps, of whom the highest paid received some $1,000 per month. While the upfront multi-year investment to create and establish the virtual personas was probably significant, this likely was, overall, a profitable business.
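As a rough sanity check, the arithmetic above can be sketched in a few lines. Only the $25-per-promo price, the eighty-person headcount and the $1,000 monthly wage come from the indictment and the description above; the puppet counts and promos-per-day figures are illustrative assumptions, since the indictment only says “hundreds” of accounts:

```python
# Back-of-the-envelope estimate of the marketing scheme's economics.
# $25 per promo is the low end of the $25-$50 range in Point 95;
# puppet counts and promos-per-day are illustrative assumptions.

PRICE_PER_PROMO = 25  # USD per promotional post

def daily_revenue(puppets, promos_per_day, price=PRICE_PER_PROMO):
    """Revenue per day if each sock-puppet sells the given number of promos."""
    return puppets * promos_per_day * price

low = daily_revenue(puppets=200, promos_per_day=1)    # conservative case
high = daily_revenue(puppets=500, promos_per_day=5)   # optimistic case

# Monthly wage bill: up to eighty staff at roughly $1,000 per month.
wages = 80 * 1_000

print(low, high, wages)  # 5000 62500 80000
```

Even the conservative case brings in roughly $150,000 over a thirty-day month, comfortably above the estimated $80,000 wage bill, which is why the scheme plausibly works as a commercial business.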

Again – this had nothing to do with political influence on the election. The sole point of the political posts was to create ‘engagement’ and a larger number of followers in each potential socio-political segment. People who buy promotional posts want them targeted at a specific audience. The Russian company could offer whatever audience was needed. It had sock-puppets with a pro-LGBT view and a large following and sock-puppets with anti-LGBT views and a large following. It could provide pro-Second Amendment crowds as well as Jill Stein followers. Each of the sock-puppets had over time gathered a group of like-minded followers. The entity buying the promotion simply had to choose which group it preferred to address.

The panic of the U.S. establishment over the defeat of its preferred candidate created an artificial storm over “Russian influence” and assumed “collusion” with the Trump campaign. (Certain Democrats, though, like Adam Schiff, profit from creating a new Cold War through the armament companies that sponsor them.)

The Mueller investigation found no “collusion” between anything Russian and the Trump campaign. The indictment does not mention any. The whole “Russian influence” storm is based on a misunderstanding of the commercial activities of a Russian marketing company in U.S. social networks.

There is a danger in this. The indictment sets up a new theory of nefarious foreign influence that could be applied even to this blog. As the U.S. lawyer Robert Barnes explains:

The only thing frightening about this indictment is the dangerous and dumb precedent it could set: foreign nationals criminally prohibited from public expression in the US during elections unless registered as foreign agents and reporting their expenditures to the FEC.

Mueller’s new crime only requires 3 elements: 1) a foreign national; 2) outspoken on US social media during US election, and 3) failed to register as a foreign agent or failed to report receipts/expenditures of speech activity. Could indict millions under that theory.

The legal theory of the indictment for most of the defendants and most of the charges alleges that the “fraud” was simply not registering as a foreign agent or not reporting expenses to the FEC because they were a foreign national expressing views in a US election.
Author Leonid Bershidsky, who writes for Bloomberg, remarks:

“I’m actually surprised I haven’t been indicted. I’m Russian, I was in the U.S. in 2016 and I published columns critical of both Clinton and Trump w/o registering as a foreign agent.”

As most of you will know, your author is German. I write pseudonymously for a mostly U.S. audience. My postings are political and during the U.S. election campaign expressed an anti-Hillary view. The blog is hosted on U.S. infrastructure paid for by me. I am not registered as a foreign agent or with the Federal Election Commission.

Under the theory on which the indictment is based I could also be indicted for a similar “Conspiracy to Defraud the United States”.

(Are those of you who kindly donated to this blog co-conspirators?)

When Yevgeny Prigozhin, the hot dog caterer who allegedly owns the internet promotion business, was asked about the indictment he responded:

“The Americans are really impressionable people, they see what they want to see. […] If they want to see the devil, let them see him.”

What Andy Black can tell you about succeeding in a Digital Economy – Interview with Amina Maikori


He is a Director of Andy Black Associates, a London-based digital media firm. He began his career in film, television and theatre before making the switch from traditional analogue media to digital media – that’s close to an impressive thirty years ago! Andy’s message on his website is a constant reminder to visitors that a digital presence is profitable for all businesses:

‘Are you ready for the Digital Economy?’ it asks.

Those who have attended Andy’s training courses know that he is very practical in his teaching methods, with great insights into managing a fast-growing number of digital channels. Andy has a process: he tries and tests the latest apps and digital platforms before introducing them to you.

The digital economy is huge. Think Konga, think DealDey, and don’t forget Amazon or eBay. Part of globalization is the luxury of reaching people, opportunities and products regardless of distance, language, time or even business type.

Here’s an interview I did with Andy about three weeks ago. He tells you just how relevant digital media is to you and how you can own it.

Amina: Thanks for agreeing to do this interview. Could you start off by telling me a little bit about yourself?

Andy: I am Andy Black, a 50-something digital consultant. I have been running my own digital consultancy for 3 years and have been working in the technology sector for over 25 years.

In the 1970s I was a pupil at Emanuel School in London, where my contemporaries included Sir Tim Berners-Lee, the creator of the web, Sir Sebastian Wood, UK Ambassador in Germany, and Matthew Taylor, Chief Executive of the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA).

In the early 1980s I was a student at the Bristol Old Vic Theatre School, where I received practical training in film, TV, radio and acting. My contemporaries at Bristol included Daniel Day-Lewis, Miranda Richardson and Samantha Bond – in this sort of company I soon realised my limitations and became an expert in spear carrying.

I worked professionally in film, TV and theatre for 2 years before joining a Soho video production company in 1987 that was launching the first analogue to digital film tech – that was 30 years ago!

Since then I have worked in data analysis, information services, search software, intelligence gathering, digital marketing & content creation. I am divorced, happily single and have a 28 year old son who is getting married next year. I look forward to being a digital granddad.


Andy (left) worked in film, TV and theatre for 2 years – here appearing as Oberon in a 1983 production of Shakespeare’s A Midsummer Night’s Dream at the Bristol Old Vic, with Lisa Bowerman as Titania and Tony Howes as Puck

Amina: Digital grandad! That would be an interesting title, definitely. When and why did you make the transition from traditional to digital media?

Andy: My transition from traditional analogue media to digital media occurred in 1987 when I started working for TeleTape Video Ltd. They introduced the first analogue to digital video display technology to the UK, and I joined a team of 4 young edgy techie creatives who started to play with and evolve commercial services with the new technology. Lots of late nights, laughter, hard work and busy weekends.

I became a digital obsessive and tried out things like subliminal messaging and building digital sculptures with monitors that displayed video and information. We were involved in lots of interesting projects, including the launch of Sky TV, video displays at the Conservative Party conference and lots of air and defence trade shows. I will always remember working on the launch of Sky TV at the National Theatre; the highlight was Rupert Murdoch slowly walking through a swirling sea of dry ice engulfing two of our huge videowall sculptures as he launched Sky TV to the assembled global media – you can imagine the pressure on me in the control room!

In 1990 I was headhunted to join Perfect Information, a City start-up, where digital was used to scan original company documents and newspaper cuttings to create a unique image-based real-time information service for City clients such as Goldman Sachs, Cazenove and Kroll Associates. I learnt on the job about data management, ISDN, metadata, information, RAID, the internet, broadband, cloud computing, telecoms and optical storage – as well as how the City and M&A teams operate.

In 1996 I joined Excalibur Technologies, a US based advanced search software company, where I worked on projects including web crawling for Factiva, advanced search software for ProQuest and the Excalibur rapid rebuttal database for the Labour Party. In many ways Twitter and automated bots have now democratised rapid rebuttal. Unfortunately it has also led to memes, fake news and algorithmic manipulation being used as a type of information warfare to distort traditional news flows and disrupt public opinion. It is fascinating to watch the analogue to digital revolution.

Amina: It must have been exciting to be part of that revolution. What do you find is the major difference between the two?

Andy: A digital file is cheap, made once and can be easily stored, copied and also shared an infinite number of times. A printed book is expensive to print and also difficult to share or store. The economics of digital totally disrupts any sector it touches. Every business needs a digital transformation strategy otherwise they risk being Blockbuster when their customers want Netflix.

Amina: For a lot of people, digital or social media is what they do on the go with no specific time scheduled for it. Your case obviously is different, perhaps with more structure. What is a typical day like for you?

Andy: I am connected 24/7 and regularly monitor Twitter for news, Facebook for news from friends, LinkedIn for news from connections, Twitter Lists for expert news and Google Custom Search for key website content for projects I am working on. I also use extensive Boolean search operators and scripts to retrieve deep web information that is not indexed by Google. When not working at a client site or on a specific project, my typical day is as follows:

At 08.00am I normally start by checking Twitter for trends and news. I then curate interesting stories regarding the digital economy and use scheduling tools so my tweets appear at the optimum time for my followers, which is between 1pm and 4pm. I normally send 5 tweets and 1 LinkedIn share a day, using Twitter saved searches, Twitter Lists, Google Custom Search and Hootsuite to make this fast and efficient.

After this I monitor trending topics and hashtags to see if I can “newsjack” a relevant trend and share a link to my website – this is a very effective tactic for growing followers and increasing traffic to my website. I normally complete this by 10.00am.


More web pages are now viewed on a mobile than a PC – is your content & website mobile friendly?

Then I log in to my website, check emails from website visitors, and check my SEO, Google Analytics, AdWords and Woorank to make sure my pages and ads are all functioning. A key daily task is monitoring for any changes in the Google, Facebook and Twitter algorithms; these three companies are now the gatekeepers for news and content, and any changes they make can have a dramatic effect on content marketing and digital campaigns. I finish this by 10.30am.

From 10.30am to 12.00 I do my admin, other business emails, proposals and Skype calls with my associates. In the afternoons I attend meetings or go to the Frontline Club to work.

In the evening I normally do 1-2 hours of reading, OSINT deep web research or trying out new software/apps. Google only indexes 5% of the internet, so an understanding of information resources on the deep web is absolutely vital – otherwise you may make “fake decisions”.

Amina: The digital sphere is flooded with all kinds of apps and social media channels; if you’re an outsider, it’s a bit hard to decide which ones to embrace or ignore. Which 5 platforms would you say are an absolute must for organizations or businesses, and why?

Andy: Whilst there are regional and demographic differences, I think the current 5 key platforms are:

  • Facebook (Page, Live, analytics, ads, Messenger)
  • Twitter (ads, analytics, Periscope, lists, geo-location search, advanced search)
  • LinkedIn (ads, SlideShare, posts, advanced search – and soon Skype)
  • Hootsuite (social media management/engagement, Hootlet, apps, scheduling)
  • Website (SEO, mobile responsive, AdWords, blog, YouTube, navigation, ecommerce, Skype)

Your website should be the hub, with social channels linking to it.

Amina: Let’s take a look at the digital economy. I notice it’s the first thing that pops up on your page. More specifically, we see the question ‘Are you ready for the digital economy?’ Why is that such an important thing?

Andy: Digital technology is reshaping traditional industry, especially those sectors that rely on direct engagement with consumers (for example, marketing, PR and design) and technological innovation (for example, science and high tech). Education, however, is the sector with the lowest proportion of digital businesses.


Countries like India, Nigeria, Brazil are using digital and mobile to transform their economies.

Digital is ubiquitous. Mobile devices are everywhere, and countries like India, Nigeria and Brazil are using digital and mobile to transform their economies. This represents huge opportunities for collaboration, trade and knowledge sharing; organisations that fail to grasp these opportunities will go out of business.

Amina: Finally, what do businesses and organizations need to do to get ready for the digital economy?

Andy: They need to move away from hierarchical structures to self-organising networks.



Move from hierarchical structures to self-organising networks.

Take a look at how the Labour Party used crowdfunding, crowdsourcing bots and AI in the 2017 UK General Election!

If you want to know more about the Digital Economy, follow Andy Black Associates on Twitter @AndyBlacz.

You can also access their free Advanced digital toolkit here.

Finally, check out how sales worked in the old days versus now. Yes, just look at that for a moment. Or two.

This interview originally appeared on Amina Maikori’s blog.

US Election is a battle between social media and mainstream media

As the U.S. presidential election reaches a critical phase, more than half of Americans say that the campaign is a very significant source of stress. Election-related stress also seems to affect the generations of Americans differently, including those who consume news via social media versus those who use traditional mainstream media. The US election has become a battle between social media and mainstream media.

The American Psychological Association (APA) has released data on stress levels associated with the presidential election. According to the APA, social media appears to affect Americans’ stress levels when it comes to the election and related topics. Nearly four in ten adults (38 percent) say that political and cultural discussions on social media cause them stress. Additionally, adults who use social media are more likely than those who do not to say the election is a very or somewhat significant source of stress (54 percent vs. 45 percent, respectively).

With mainstream media focusing on the Donald Trump “groping allegations” and social media focusing on the Wikileaks “Crooked Hillary” coverage, significant cognitive bias is being amplified and accelerated.


Google Trends indicates that U.S. and worldwide Google searches for “Wikileaks and Clinton” far exceed those for “Trump Sexual Allegations”.


But mainstream media continues to focus on the Trump allegations and has very little coverage of the Clinton Wikileaks disclosures.


Trump and Clinton are both suffering huge reputational damage that may well poison the eventual winner’s presidency. U.S. voters are also suffering from cognitive dissonance and stress as a result of the schizophrenic news coverage.

While around half of adults, regardless of generation, report that the election is a very significant source of stress, youngest and oldest generations appear more likely to be affected, with 56 percent of Millennials (ages 19 to 37) and 59 percent of Matures (ages 71+) saying the election is a very significant source of stress. This is significantly more than the 45 percent of Gen Xers (ages 38 to 51), and directionally more so than the 50 percent of Boomers (ages 52 to 70).

It is likely that the battle between social media and mainstream media will get even more brutal in the weeks leading up to the election on November 8th. On the morning of November 9th, Americans may wake up divided and traumatised, realising that no one has won.