Disinformation for hire: How a new type of PR firm sells lies online

Disinformation for hire – This article originally appeared in BuzzFeed News on 7 January 2020

Peng Kuan Chin pulled out his phone, eager to show the future of online manipulation and disinformation.

Unseen servers began crawling the web for Chinese articles and posts. The system quickly reorganized the words and sentences into new text. His screen displayed a rapidly increasing tally of the articles generated by his product, which he dubs the “Content Farm Automatic Collection System.”

With the articles in hand, a set of websites that Peng controlled published them, and his thousands of fake social media accounts spread them across the internet, instantly sending manipulated content into news feeds, messaging app inboxes, and search results.

“I developed this for manipulating public opinion,” Peng told the Reporter, an investigative news site in Taipei, which partnered with BuzzFeed News for this article. He added that automation and artificial intelligence “can quickly generate traffic and publicity much faster than people.”

The 32-year-old wore Adidas Yeezy sneakers and a gold Rolex as he sat in a two-story office in the industrial part of Taichung that was filled with feng shui items such as a money frog and lucky bamboo. A riot gun, which uses compressed air to fire nonlethal projectiles, rested on his desk. Peng said he bought it for “recreational purposes.”

In the interview, he detailed his path from sending spam emails as a 14-year-old to being recruited to help with the 2018 reelection campaign of Najib Razak, the former prime minister of Malaysia.

Peng’s clients are companies, brands, political parties, and candidates in Asia. “Customers have money, and I don’t care what they buy,” he said. They’re purchasing an end-to-end online manipulation system, which can influence people on a massive scale — resulting in votes cast, products sold, and perceptions changed.

Peng’s product is modeled on automation software he saw in China, which he believes no one else outside the mainland has. But while his technology may be unique, his company, Bravo-Idea, is not. There is now a worldwide industry of PR and marketing firms ready to deploy fake accounts, disinformation, false narratives, and pseudo news websites for the right price.

If disinformation in 2016 was characterized by Macedonian spammers pushing pro-Trump fake news and Russian trolls running rampant on platforms, 2020 is shaping up to be the year communications pros for hire provide sophisticated online propaganda and disinformation operations to anyone willing to pay. Around the globe, politicians, parties, governments, and other clients hire what is known in the industry as “black PR” firms to spread lies and disinformation and to manipulate online discourse.

A BuzzFeed News review — which examined account takedowns announced by the platforms and investigations published by security and research firms — found that since 2011, at least 27 online information operations have been partially or wholly attributed to PR or marketing firms. Of those, 19 occurred in 2019 alone.

Most recently, in late December, Twitter announced it removed more than 5,000 accounts that it said were part of “a significant state-backed disinformation operation” in Saudi Arabia carried out by marketing firm Smaat. The same day, Facebook announced a takedown of hundreds of accounts, pages, and groups that it found were engaged in “foreign and government interference” on behalf of the government of Georgia. It attributed the operation to Panda, an advertising agency in Georgia, and to the country’s ruling party.

Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told BuzzFeed News “the professionalization of deception and disinformation” is a growing threat.

“The broader notion of deception, disinformation and influence operations has been around for some time, but over the past several years, we have seen […] companies grow up that basically build their business model around deception and disinformation,” he said.

Although Peng may be one of the most sophisticated black PR practitioners, he is far from the only one. The Saudi and Georgian revelations followed a drumbeat of similar takedowns of and investigations into marketing and PR firms in countries such as Israel, Egypt, the United Arab Emirates, Ukraine, Brazil, Indonesia, and Poland.

Cindy Otis, a former CIA officer and the author of True or False: A CIA Analyst’s Guide to Spotting Fake News, told BuzzFeed News that information operations by nation-states like Russia and Iran have provided “a playbook for individuals and groups that are financially motivated to delve into this space.”

The emergence of black PR firms means investigators at platforms, security firms, and within the intelligence community are “spending increasing amounts of time looking at the disinformation-for-hire services that are out there,” said Otis. Some researchers estimate that in certain years a majority of web traffic has not been human at all.

The Archimedes Group, an Israeli black PR firm, created networks of hundreds of Facebook pages, accounts, and groups around the world, boasting on its website that it would “use every tool and take every advantage available in order to change reality according to our client’s wishes.” For an election in Mali, it managed a fake fact-checking page that claimed to be run by local students. In Tunisia, it ran a page titled “Stop à la Désinformation et aux Mensonges” (“Stop Disinformation and Lies”). In Nigeria, it ran pages advocating for and against the same politician, former vice president Atiku Abubakar. Researchers postulated that the pro-Abubakar page “was likely designed to identify his supporters in order to target them with anti-Abubakar content later.”

In Ukraine, the PR firm Pragmatico employed dozens of young, digitally savvy people to pump out positive comments on fake Facebook accounts about clients. In Poland, Cat@Net managed networks of fake Twitter accounts operated by staffers with disabilities working from home, whom the agency hired because it could pay them below-market rates while they received government subsidies. Reporting by Investigate Europe also found Cat@Net performed work for one of Poland’s most prominent PR agencies, Art-Media. (The company denied working with Cat@Net.)

In Puerto Rico, journalists revealed that former governor Ricardo Rosselló was an administrator of a Telegram group chat where a consultant from marketing firm KOI appeared to plan and direct social media campaigns to push pro-government messages and attack rivals. In August, Rosselló resigned, in part because of widespread outrage over the chats.

Peng Kuan Chin

Jameson Wu / The Reporter


Peng’s career is a road map of how online manipulation and disinformation services evolved from solo operations to agencies that openly advertise their services and employ large staffs.

At 14 years old, he wrote an email spam program to stuff mailboxes of people in Taiwan. “Using 30 computers I had in a room, I became a very big player in sending spam,” Peng said. “I think 1 of every 2 people in Taiwan has received junk mail that I was responsible for.”

In high school, he created software to spam popular internet message boards with offers, labeling the product as an “Automatic Bulletin Posting Kit.” One ad asked visitors to porn sites, “Do you know that excessive masturbation can cause impotence and premature ejaculation?” Peng said the fearmongering ad helped drive sales for male enhancement pills he was promoting.

Peng created thousands of fake accounts on popular Chinese message boards to promote his and his clients’ products. He soon began making and selling websites and counseling via Skype on how to make money online.

“I hosted so many that I even lost my voice! I was a high school student then, and my mother was wondering what I was doing speaking on the phone all day,” he said.

In 2011, singers from schools across Taiwan competed in a popular TV competition. Peng’s alma mater won; its performer received more than 41 million online votes, almost twice the Taiwanese population. A school official confirmed to the Reporter that the school had requested help from Peng in the competition but declined to comment further.

Since 2013, Peng has been developing his “Content Farm Automatic Collection System.” His clients use his system to overwhelm their chosen corners of the internet with torrents of AI-generated text that influence search results. Peng perfected the system by buying the services of every social media and SEO manipulation offering he could find on Taobao, a huge Chinese e-commerce site owned by Alibaba.

“I was scammed a few times in the beginning because I didn’t really understand the software,” he said. “Many of them were fake or useless.”

Peng instructed his six developers to build a system inspired by the best of what he saw.

“This marketing logic is in response to China’s huge population of 1.4 billion people, [where content] only gets eyeballs when there is volume,” he said. “In comparison, there are only 23 million people in Taiwan. Applying this logic, I will create the largest volume in the shortest amount of time, and the information I spread will reach everyone’s eyes.”

While Peng focuses on automation, black PR firms elsewhere rely on manual labor, accomplishing through brute force what he does with code.

For investigative reporter Vasil Bidun, that meant an eight-hour shift in the trendy Podil neighborhood of Kyiv. He would log on to different fake Facebook accounts to comment in favor of candidates, criticize their opponents, or steer conversations in specific directions. Ukraine’s presidential election was underway, and he said his employer, Pragmatico, seemed to have secured contracts with several people running for office. (All politicians asked about the troll farm have denied involvement.)

“The aim is to get an emotional reaction from a person,” Bidun said in an interview. “If they read a comment, even [if they understand] that it was written by a bot, it could have affected them emotionally and it becomes more difficult for them to control themselves.”

“Cat@Net said it was a PR company, but in reality it was [a] troll farm.”

But Bidun wasn’t just punching the clock at the agency. After three months working there undercover, he published an in-depth investigation. He and roughly 50 other Pragmatico employees worked in a single apartment, rotating in three shifts. It was mostly students trying to earn extra money, just over $300 per month, he said. No one talked about the ethics of the work — they just did what they were told, promoting the candidacies of both conservative and progressive politicians, including popular musician Svyatoslav Vakarchuk.

“It was the summer break and it was a method to earn a bit of money,” he said. “Most don’t think much about the consequences their work can have; they just write.”

Two days before Bidun’s investigation was published in September 2019, Facebook announced the removal of the firm’s assets on the platform, which amounted to 168 accounts, 149 pages, and 79 groups. The social media giant also revealed that Pragmatico had spent $1.6 million on ads, a significant sum for the Ukrainian market.

But black PR continues to flourish on social media in other parts of Eastern Europe. This year, while reporting for Investigate Europe, Katarzyna Pruszkiewicz spent six months undercover working for Cat@Net, a Polish company that describes itself as an “ePR agency comprising specialists who build a positive image of companies, private individuals and public institutions — mostly in social media.”

“Cat@Net said it was a PR company, but in reality it was [a] troll farm. They did fake accounts and disinformation,” she told BuzzFeed News. (The company denied it was a troll farm in a statement posted to its website.)

Pruszkiewicz said she and her colleagues used fake Twitter and Facebook accounts to deliver work for the firm’s clients. This meant promoting Polish state media, pumping up the left-wing politicians who hired them, or attacking the government’s decision to place an order for American F-35 fighter jets.

Cat@Net’s staffers worked remotely, congregating in Slack to receive their assignments. Sometimes a professional copywriter would provide them content, but many times it was up to employees to come up with messages for the fake accounts. Team members would celebrate each other’s successes, such as “when someone important like a politician answered a comment from the fake accounts,” Pruszkiewicz said.

Cat@Net focused on hiring people with disabilities because they could be paid less and qualified for government subsidies, according to Pruszkiewicz.

“They are in a wheelchair and have bills to pay. They are often without professional skills, and Cat@Net gave them work and got from the state a lot of money for [employing] these people,” she said.

After her reporting was published in October 2019 in Newsweek Poland, the Polish government opened an investigation into the company for the disability benefits it received. However, Twitter accounts run by Cat@Net are still online, Pruszkiewicz said.

“The fake accounts still exist today and are writing on Twitter like nothing happened, and every day I can see what they are writing on Twitter and Facebook. It’s really frustrating, because I spent six months [investigating] and the company still exists,” she said.

Nowhere is the rise of black PR firms more intertwined with marketing and politics than in the Philippines. Many legitimate-seeming agencies here offer black PR services that include fake social media accounts, websites, and coordinated harassment campaigns.

This year, Facebook announced takedowns of properties attributed to Twinmark Media Enterprises, a digital marketing company in the Philippines, and Nic Gabunada, the social media director for Filipino President Rodrigo Duterte’s 2016 campaign. In both cases, Facebook said the operations were engaged in “coordinated inauthentic activity.”

Gabunada previously insisted in an interview with BuzzFeed News that the Facebook engagement for Duterte’s campaign had been “organic” and “volunteer-driven.”

As previously reported by BuzzFeed News, some politicians who had decried Duterte’s use of fake Facebook accounts and trolling in 2016 had used social media manipulation services in their own campaigns.

Black PR services have become so lucrative in the Philippines that many PR firms feel pressured to offer them. One agency director told BuzzFeed News it’s “tempting” to offer black PR services because of the potential profit. It’s difficult to compete against companies that deliver these services, she added.

“We’ve had several campaigns where we were up against other firms that were willing to employ any kind of tactic to combat whatever we were putting out there if we were promoting a candidate and they were promoting an opposing candidate, for instance,” said the agency director, who asked not to be named in order to speak freely about the industry.

“The Philippines offers a cautionary tale for other countries.”

Jonathan Corpus Ong, an associate professor of global digital media at the University of Massachusetts Amherst, has been studying black PR firms and trolling in the Philippines for years. “The Philippines offers a cautionary tale for other countries for what happens when disinformation production within the PR industry has become so financially lucrative that they have moved from shady black market transactions to the professional respectability of the corporate boardroom,” he told BuzzFeed News.

Ong said PR firms use industry jargon while communicating with clients to help “neutralize the stigma of the real disinformation work that they do.”

“For instance, they would use the terms ‘supplemental pages’ and ‘digital support workers’ to describe what is otherwise known as ‘disinformation fake news sites’ or ‘paid trolls’ when they pitch their services to prospective clients. This lends an aura of respectability to the transaction and — crucially — gives politicians a level of plausible deniability,” he said.

The rise of black PR firms is on the radar of the global PR industry, which has long battled problems of its own making. In 2017, the industry took a stand against social media manipulation when the Public Relations and Communications Association expelled Bell Pottinger, a now-defunct London-based PR firm, after investigating its work in South Africa, where the firm stoked racial tensions in service of a billionaire client. Bell Pottinger previously received a $500 million contract from the Pentagon to execute a top secret propaganda program in Iraq, according to the Bureau of Investigative Journalism.

Global agencies say they hold themselves to industry standards. Jill Tannenbaum, the chief communications and marketing officer for PR giant Weber Shandwick, told BuzzFeed News that the company must “engage audiences with campaigns that are rooted in truth,” even when it “compete[s] in markets where dishonest tactics take place.”

“We have a process in place to assess any client engagements that might not adhere to our values or include tactics that are not truthful or transparent, so we can counsel our teams and our clients accordingly,” she said in a statement. “Our local leaders – in the Philippines and around the world – are empowered to turn away work that is of concern or that does not adhere to our values.”

In the wake of the Bell Pottinger expulsion, the International Communications Consultancy Organisation, an umbrella organization representing PR trade groups around the world, established 10 principles known as the Helsinki Declaration. They require communication professionals to “be aware of the power of social media, and use it responsibly” and to “never engage in the creation of or knowingly circulate fake news.”

Francis Ingham, director general of the PRCA and chief executive of the ICCO, told BuzzFeed News that black PR firms give ethical practitioners a bad name.

“Our members are furious that they are ever tainted with the stain of these people who operate outside of [the industry’s] ethical parameters,” he said.

In spite of the increasing number of information operations being attributed to PR or marketing firms, Ingham said, these firms are the exception.

“I recognize there will always be a tiny percentage of people who call themselves PR or marketing practitioners who operate in the gray or black area,” he said.

While the legitimate PR industry works to differentiate itself from these practitioners, platforms are finding it increasingly difficult to prune black PR from their ecosystems.

“If the company is working on multiple platforms and has a wide range of business interests, we might not be able to completely destroy them,” said Facebook’s Gleicher.

He said Facebook’s approach is to remove assets involved in a specific operation and ban the entire organization. In some cases, Facebook also bans key employees from the platform.

“The reason we do that is making it very clear that it’s not going to be a profitable business model on our platform,” Gleicher said. “You build a business around this, we will remove you.”

Peng, however, is undeterred. He said it’s easy to evade Facebook’s controls, and that demand for his services remains strong.

“I think cracking Facebook is quite easy. My software is developed to constantly fight against Facebook,” he said. “This is done because there are markets, customers, and needs, and people have money to pay for the service. We do it because there is a demand.” ●


Correction: Najib Razak was defeated in the 2018 Malaysian elections. An earlier version of this article misstated that the former prime minister resigned.


Correction: Facebook removed accounts and other assets linked to Pragmatico before Vasil Bidun’s investigation was published. An earlier version of this article misstated that the removals occurred after his report.

How much of the Internet is fake?

Max Read of New York Magazine offers a fascinating analysis.

In late November, the Justice Department unsealed indictments against eight people accused of fleecing advertisers of $36 million in two of the largest digital ad-fraud operations ever uncovered. Digital advertisers tend to want two things: people to look at their ads and “premium” websites — i.e., established and legitimate publications — on which to host them. The two schemes at issue in the case, dubbed Methbot and 3ve by the security researchers who found them, faked both. Hucksters infected 1.7 million computers with malware that remotely directed traffic to “spoofed” websites — “empty websites designed for bot traffic” that served up a video ad purchased from one of the internet’s vast programmatic ad-exchanges, but that were designed, according to the indictments, “to fool advertisers into thinking that an impression of their ad was served on a premium publisher site,” like that of Vogue or The Economist.

Views, meanwhile, were faked by malware-infected computers with marvelously sophisticated techniques to imitate humans: bots “faked clicks, mouse movements, and social network login information to masquerade as engaged human consumers.” Some were sent to browse the internet to gather tracking cookies from other websites, just as a human visitor would have done through regular behavior. Fake people with fake cookies and fake social-media accounts, fake-moving their fake cursors, fake-clicking on fake websites — the fraudsters had essentially created a simulacrum of the internet, where the only real things were the ads.

How much of the internet is fake? Studies generally suggest that year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed maybe both at once, or in succession, as you turn it over in your head.

The metrics are fake.

Take something as seemingly simple as how we measure web traffic. Metrics should be the most real thing on the internet: They are countable, trackable, and verifiable, and their existence undergirds the advertising business that drives our biggest social and search platforms. Yet not even Facebook, the world’s greatest data-gathering organization, seems able to produce genuine figures. In October, small advertisers filed suit against the social-media giant, accusing it of covering up, for a year, its significant overstatements of the time users spent watching videos on the platform (by 60 to 80 percent, Facebook says; by 150 to 900 percent, the plaintiffs say). According to an exhaustive list at MarketingLand, over the past two years Facebook has admitted to misreporting the reach of posts on Facebook Pages (in two different ways), the rate at which viewers complete ad videos, the average time spent reading its “Instant Articles,” the amount of referral traffic from Facebook to external websites, the number of views that videos received via Facebook’s mobile site, and the number of video views in Instant Articles.

Can we still trust the metrics? After the Inversion, what’s the point? Even when we put our faith in their accuracy, there’s something not quite real about them: My favorite statistic this year was Facebook’s claim that 75 million people watched at least a minute of Facebook Watch videos every day — though, as Facebook admitted, the 60 seconds in that one minute didn’t need to be watched consecutively. Real videos, real people, fake minutes.

The people are fake.

And maybe we shouldn’t even assume that the people are real. Over at YouTube, the business of buying and selling video views is “flourishing,” as the Times reminded readers with a lengthy investigation in August. The company says only “a tiny fraction” of its traffic is fake, but fake subscribers are enough of a problem that the site undertook a purge of “spam accounts” in mid-December. These days, the Times found, you can buy 5,000 YouTube views — 30 seconds of a video counts as a view — for as low as $15; oftentimes, customers are led to believe that the views they purchase come from real people. More likely, they come from bots. On some platforms, video views and app downloads can be forged in lucrative industrial counterfeiting operations. If you want a picture of what the Inversion looks like, find a video of a “click farm”: hundreds of individual smartphones, arranged in rows on shelves or racks in professional-looking offices, each watching the same video or downloading the same app.

This is obviously not real human traffic. But what would real human traffic look like? The Inversion gives rise to some odd philosophical quandaries: If a Russian troll using a Brazilian man’s photograph to masquerade as an American Trump supporter watches a video on Facebook, is that view “real”? Not only do we have bots masquerading as humans and humans masquerading as other humans, but also sometimes humans masquerading as bots, pretending to be “artificial-intelligence personal assistants,” like Facebook’s “M,” in order to help tech companies appear to possess cutting-edge AI. We even have whatever CGI Instagram influencer Lil Miquela is: a fake human with a real body, a fake face, and real influence. Even humans who aren’t masquerading can contort themselves through layers of diminishing reality: The Atlantic reports that non-CGI human influencers are posting fake sponsored content — that is, content meant to look like content that is meant to look authentic, for free — to attract attention from brand reps, who, they hope, will pay them real money.

The businesses are fake.

The money is usually real. Not always — ask someone who enthusiastically got into cryptocurrency this time last year — but often enough to be an engine of the Inversion. If the money is real, why does anything else need to be? Earlier this year, the writer and artist Jenny Odell began to look into an Amazon reseller that had bought goods from other Amazon resellers and resold them, again on Amazon, at higher prices. Odell discovered an elaborate network of fake price-gouging and copyright-stealing businesses connected to the cultlike Evangelical church whose followers resurrected Newsweek in 2013 as a zombie search-engine-optimized spam farm. She visited a strange bookstore operated by the resellers in San Francisco and found a stunted concrete reproduction of the dazzlingly phony storefronts she’d encountered on Amazon, arranged haphazardly with best-selling books, plastic tchotchkes, and beauty products apparently bought from wholesalers. “At some point I began to feel like I was in a dream,” she wrote. “Or that I was half-awake, unable to distinguish the virtual from the real, the local from the global, a product from a Photoshop image, the sincere from the insincere.”

The content is fake.

The only site that gives me that dizzying sensation of unreality as often as Amazon does is YouTube, which plays host to weeks’ worth of inverted, inhuman content. TV episodes that have been mirror-flipped to avoid copyright takedowns air next to huckster vloggers flogging merch who air next to anonymously produced videos that are ostensibly for children. An animated video of Spider-Man and Elsa from Frozen riding tractors is not, you know, not real: Some poor soul animated it and gave voice to its actors, and I have no doubt that some number (dozens? Hundreds? Millions? Sure, why not?) of kids have sat and watched it and found some mystifying, occult enjoyment in it. But it’s certainly not “official,” and it’s hard, watching it onscreen as an adult, to understand where it came from and what it means that the view count beneath it is continually ticking up.

These, at least, are mostly bootleg videos of popular fictional characters, i.e., counterfeit unreality. Counterfeit reality is still more difficult to find—for now. In January 2018, an anonymous Redditor created a relatively easy-to-use desktop-app implementation of “deepfakes,” the now-infamous technology that uses artificial-intelligence image processing to replace one face in a video with another — putting, say, a politician’s over a porn star’s. A recent academic paper from researchers at the graphics-card company Nvidia demonstrates a similar technique used to create images of computer-generated “human” faces that look shockingly like photographs of real people. (Next time Russians want to puppeteer a group of invented Americans on Facebook, they won’t even need to steal photos of real people.) Contrary to what you might expect, a world suffused with deepfakes and other artificially generated photographic images won’t be one in which “fake” images are routinely believed to be real, but one in which “real” images are routinely believed to be fake — simply because, in the wake of the Inversion, who’ll be able to tell the difference?

Only 4% of the Internet is indexed by Google

Our politics are fake.

Such a loss of any anchoring “reality” only makes us pine for it more. Our politics have been inverted along with everything else, suffused with a Gnostic sense that we’re being scammed and defrauded and lied to but that a “real truth” still lurks somewhere. Adolescents are deeply engaged by YouTube videos that promise to show the hard reality beneath the “scams” of feminism and diversity — a process they call “red-pilling” after the scene in The Matrix when the computer simulation falls away and reality appears. Political arguments now involve trading accusations of “virtue signaling” — the idea that liberals are faking their politics for social reward — against charges of being Russian bots. The only thing anyone can agree on is that everyone online is lying and fake.

We ourselves are fake.

Which, well. Everywhere I went online this year, I was asked to prove I’m a human. Can you retype this distorted word? Can you transcribe this house number? Can you select the images that contain a motorcycle? I found myself prostrate daily at the feet of robot bouncers, frantically showing off my highly developed pattern-matching skills — does a Vespa count as a motorcycle, even? — so I could get into nightclubs I’m not even sure I want to enter. Once inside, I was directed by dopamine-feedback loops to scroll well past any healthy point, manipulated by emotionally charged headlines and posts to click on things I didn’t care about, and harried and hectored and sweet-talked into arguments and purchases and relationships so algorithmically determined it was hard to describe them as real.

Where does that leave us? I’m not sure the solution is to seek out some pre-Inversion authenticity — to red-pill ourselves back to “reality.” What’s gone from the internet, after all, isn’t “truth,” but trust: the sense that the people and things we encounter are what they represent themselves to be. Years of metrics-driven growth, lucrative manipulative systems, and unregulated platform marketplaces have created an environment where it makes more sense to be fake online — to be disingenuous and cynical, to lie and cheat, to misrepresent and distort — than it does to be real. Fixing that would require cultural and political reform in Silicon Valley and around the world, but it’s our only choice. Otherwise, we’ll all end up on the bot internet of fake people, fake clicks, fake sites, and fake computers, where the only real thing is the ads.

A version of this article appeared in the December 24, 2018, issue of New York Magazine.

How can you recover criminal assets held in cryptocurrencies?

How can you recover criminal assets held in cryptocurrencies? If you’re pursuing a recalcitrant debtor or sophisticated fraudster who happens to be using cryptocurrencies such as Bitcoin, you might feel as though you’ve hit a dead end. How can you recover assets from someone who has specifically gone out of their way to hide their wealth in digital currencies? Well, as Burford’s recent case study shows, options are available and paths to recovery exist, so long as you know who to ask and where to look.

What is cryptocurrency?

Bitcoin (BTC) is a digital currency based on a protocol that allows data to be stored in a transparent and unalterable way in a decentralised ledger: essentially, a database of which everyone holds a copy, known as the blockchain. Bitcoin is the first example of a cryptocurrency, an asset class similar to traditional “fiat” currencies, but whose supply is controlled by lines of code rather than central banks. Transactions are verified using cryptography. Cryptocurrencies are typically traded via exchanges, which act as digital marketplaces connecting buyers with sellers, and stored using digital wallets.
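To make the idea of an unalterable, decentralised ledger concrete, here is a minimal Python sketch of a hash-chained ledger. It illustrates only the chaining principle (why tampering with an old record is detectable), not Bitcoin's actual data structures or consensus rules:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """Tampering with an earlier block breaks every later link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "btc": 0.5}])
add_block(chain, [{"from": "bob", "to": "carol", "btc": 0.2}])
assert verify(chain)

# Alter a transaction in the first block: the chain no longer verifies.
chain[0]["transactions"][0]["btc"] = 50.0
assert not verify(chain)
```

Because every block commits to the hash of the one before it, rewriting history would require recomputing every subsequent block, which is what makes such a ledger effectively unalterable once widely replicated.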

Much has been written about the anonymity of cryptocurrencies such as Bitcoin and about the unbreakable cryptographic verification and encryption used to secure records of transactions in the blockchain. It is no secret that this is why Bitcoin is often favoured by criminals, techies and investors alike. In fact, it is precisely because of this underlying technology that every Bitcoin transaction of any size is publicly viewable, provided of course that you know what you are looking for!



Our case study concerns two UK residents (for our purposes here let’s refer to them as Smith and Jones) who ran a series of fraudulent schemes that netted them tens of millions of pounds. After the frauds were uncovered, Smith and Jones entered insolvency, no doubt hoping to come out clean the other side upon discharge. Needless to say, Smith and Jones were not forthright with the trustees and the investigation uncovered neither assets recoupable to the estate nor answers concerning the whereabouts of the millions in pilfered investor funds. If the trustees were to make any recovery, it was clear that a highly complex, long-running and expensive investigation would be required.

The trustees sought help from Burford and its team of asset recovery specialists. We have experience partnering with resource-strapped insolvency estates, in which our role typically entails identifying assets, formulating a legal route to recovery, and funding the ultimate recovery of those assets. In the case of our debtors, Smith and Jones, it became clear early in our investigation that they may have squirreled away some of the fraud proceeds in cryptocurrencies. Far from hitting a dead end, we uncovered actionable intelligence using traditional tools in the insolvency war chest that led us to new third parties, offshore accounts, and sight of fund flows suggesting that Smith and Jones had access to substantial liquid assets they had not disclosed to the trustees.


We had reason to suspect that Smith and Jones held undisclosed cryptocurrency assets. Our initial desk-based research identified various domains for bitcoin investment websites registered by known proxies of Smith and Jones around the time of their bankruptcies, when they were also registering other offshore businesses used to siphon proceeds of the fraud. We knew from their backgrounds that Smith and Jones were tech-savvy investors, so it seemed likely that they knew their BTC from their ETH and understood how cryptocurrency investing and trading could make them money.

Our suspicions were confirmed when we obtained documents which appeared to show payments, albeit nominal sums, being made by Smith and Jones to UK-based cryptocurrency exchanges. We got hold of these documents via a combination of traditional disclosure orders and overseas discovery mechanisms. What we did not understand, however, was why the exchanges also appeared to be making payments back to Smith and Jones.

We approached one of the UK-based exchanges for disclosure of all records relating to Jones, Smith and their proxies. The exchange disclosed a host of information, including Bitcoin wallets, Bitcoin addresses, and transaction IDs, information which (aside from the exchange) is usually held only by the holder of the Bitcoins or the parties to a Bitcoin transaction. Bitcoin addresses, for instance, are like email addresses, but instead of sending messages they are used to send and receive Bitcoin. The exchange also disclosed details of a six-figure sum deposited into the exchange from an offshore account held by a third-party company incorporated in a banking secrecy jurisdiction. Crucially, the exchange disclosed that the contact on the account and the owner of the company was Smith and Jones’s known proxy.


What our analysis showed was that Smith and Jones’s bagman attempted to pay a UK bitcoin exchange on three separate occasions: first, direct from his Belize bank account; second, from a Dubai account held by a third-party company; and third, via a third-party payment processor. On the first two occasions, the payment was returned because the bagman failed the exchange’s KYC and AML tests. By verifying the bitcoin addresses and transaction IDs associated with his third payment in the blockchain (at www.blockchain.com/explorer) and cross-referencing with the payment dates we saw on various account statements, we were able to map Smith and Jones’s crypto fund flows, transactions from one wallet/address to another, during the period of their insolvencies.
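The fund-flow mapping step described above can be sketched as a simple aggregation over transaction records. The addresses, transaction IDs, amounts and dates below are entirely hypothetical, standing in for the kind of data a block explorer returns:

```python
from collections import defaultdict

# Hypothetical transaction records of the kind a blockchain explorer
# returns: (txid, from_address, to_address, amount_btc, date).
records = [
    ("tx1", "addr_bagman", "addr_exchange", 1.50, "2017-03-01"),
    ("tx2", "addr_exchange", "addr_offshore", 1.45, "2017-03-04"),
    ("tx3", "addr_offshore", "addr_bagman", 0.40, "2017-05-20"),
]

# Aggregate amounts between address pairs to map who paid whom.
flows = defaultdict(float)
for txid, src, dst, amount, date in records:
    flows[(src, dst)] += amount

for (src, dst), total in sorted(flows.items()):
    print(f"{src} -> {dst}: {total:.2f} BTC")
```

In a real investigation the edges of this graph would be cross-referenced against the payment dates on bank and exchange statements, exactly as described in the case study.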


Ultimately, our inkling that Smith and Jones held digital assets led us to seek disclosure from cryptocurrency agents, such as exchanges, which in turn led us to previously undisclosed bank accounts, third-party companies and domains outside the jurisdiction. This crucial intelligence contributed to our overall understanding of Smith and Jones’s modus operandi and to our global enforcement strategy, particularly when used in conjunction with the traditional powers available to insolvency practitioners.

As with most investigations, new leads typically raise more questions than they do answers, but hopefully, this case study shows that Bitcoin should never be a dead-end. Information can be gleaned about cryptocurrencies and can progress your investigation, provided you’re knocking on the right doors!

Rupert Black, an analyst at Burford Capital (and son of Andy Black), is the author. This article was first published on KNect365 Law and is available here.

The impact of bots on opinions in social networks

Social networks have given us the ability to spread messages and influence large populations very easily. Malicious actors can take advantage of social networks to manipulate opinions using artificial accounts, or bots. It is suspected that the 2016 U.S. presidential election was the target of such social network interference, potentially by foreign actors, and foreign influence bots are also suspected of having attacked European elections. Multiple research studies confirm that the bots’ main action was sharing politically polarized content in an effort to shift opinions. The potential threat to election security from social networks has become a concern for governments around the world.

In the U.S., Members of Congress have not been satisfied with the response of major social networks and have asked them to take actions to prevent future interference in the U.S. democratic process by foreign actors. In response, major social media companies have taken serious steps. Facebook has identified several pages and accounts tied to foreign actors and Twitter suspended over 70 million bot accounts.

Despite all of the efforts taken to counter the threat posed by bots, one important question remains unanswered: how many people were impacted by these influence campaigns? More generally, how can we quantify the effect of bots on the opinions of users in a social network? Answering this question would allow one to assess the potential threat of an influence campaign. Also, it would allow one to test the efficacy of different responses to the threat. Studies have looked at the volume of content produced by bots and their social network reach during the 2016 election. However, this data alone does not indicate the effectiveness of the bots in shifting opinions.

The challenge is we do not know what would have happened if the bots had not been there. Such a counterfactual analysis is only possible if there is a model which can predict the opinions of users in the presence or absence of bots. For a model to be useful in assessing the impact of bots, it must be validated on real social network data. Once validated, an opinion model can then be used to assess the impact of different groups of bots.

The Impact of Bots on Opinions in Social Networks
Visualization of the network of Twitter users discussing the second 2016 presidential debate. Node sizes are proportional to their follower count in the network and node colors indicate their tweet-based opinion. Nodes favoring Trump are red and nodes favoring Clinton are blue.

A recent research report from the Massachusetts Institute of Technology (MIT) presented a method to quantify the impact of bots on the opinions of users in a social network. The analysis focused on a network of Twitter users discussing the 2016 presidential election between Hillary Clinton and Donald Trump. The key strategy was to find a model for opinion dynamics in the network. First, the researchers validated the model by showing that the user opinions it predicted aligned with the opinions these users expressed in their social media posts. Second, they identified bots in the network using a custom detection algorithm. Third, they used the opinion model to calculate how opinions would shift if the bots were removed from the network.
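The counterfactual step (recomputing opinions with the bots deleted) can be illustrated with a much-simplified sketch. The neighbor-averaging dynamics, the tiny network and the opinion values below are our own illustrative assumptions, not the model or data the MIT study actually used:

```python
def converge_opinions(followers, stubborn, steps=200):
    """Iteratively average each user's opinion with the accounts they follow.
    `stubborn` maps node -> fixed opinion (e.g. bots that never update)."""
    nodes = set(followers)
    op = {n: stubborn.get(n, 0.5) for n in nodes}
    for _ in range(steps):
        new = {}
        for n in nodes:
            if n in stubborn:
                new[n] = stubborn[n]
            else:
                feeds = followers[n]
                new[n] = sum(op[f] for f in feeds) / len(feeds) if feeds else op[n]
        op = new
    return op

# Tiny hypothetical network: users u1-u3 plus one bot pushing opinion 1.0.
followers = {"u1": ["u2", "bot"], "u2": ["u1"], "u3": ["u1"], "bot": []}
with_bot = converge_opinions(followers, {"bot": 1.0})

# Counterfactual: delete the bot from the network and recompute.
no_bot_graph = {n: [f for f in fs if f != "bot"]
                for n, fs in followers.items() if n != "bot"}
without_bot = converge_opinions(no_bot_graph, {})

# Average opinion shift attributable to the bot.
shift = sum(abs(with_bot[u] - without_bot[u]) for u in no_bot_graph) / len(no_bot_graph)
```

Even in this toy example a single stubborn, well-connected account drags every human user's equilibrium opinion toward its own, which is the intuition behind measuring bot impact by removal.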

MIT discovered that a small number of bots have a disproportionate impact on the network opinions, and this impact is primarily due to their elevated activity levels. In the dataset, MIT found that the bots which supported Clinton caused a bigger shift in opinions than the bots which supported Trump, even though there are more Trump bots in the network.

The Digital Influence Machine

In light of how the advertising capabilities of Facebook, Twitter, and other social networks have been used in recent political elections across the world, a new report argues that today’s digital advertising infrastructure creates disturbing new opportunities for political manipulation and other forms of anti-democratic strategic communication. Ad platforms, web publishers, and other intermediaries have developed an infrastructure of data collection and targeting capacities that the report calls the Digital Influence Machine (DIM).

The DIM incorporates a set of overlapping technologies for surveillance, targeting, testing, and automated decision-making designed to make advertising – from the commercial to the political – more powerful and efficient. The report claims the DIM can identify and target weak points where groups and individuals are most vulnerable to strategic influence, and that it amounts to a form of information warfare.

Unlike campaigns of even a decade ago, data-driven advertising allows political actors to zero in on those believed to be the most receptive and pivotal audiences for very specific messages, while also helping to minimize the risk of political blowback by limiting the messages’ visibility to those who might react negatively.

The various technologies and entities of the Digital Influence Machine cohere around three interlocking communication capacities:

  • To use sprawling systems of consumer monitoring to develop detailed consumer profiles
  • To target customised audiences with strategic messaging across devices, channels, and contexts
  • To automate and optimise tactical elements of influence campaigns, leveraging consumer data and real-time feedback to test and tweak key variables including the composition of target publics and the timing, placement, and content of ad messages

The social influence of the DIM, like all technological systems, is also largely a product of the political, economic, and social context in which it developed. The report analysed three key shifts in the US media and political landscape that contextualise the use of the DIM to manipulate political activity:

  • The decline of professional journalism
  • The expansion of financial resources devoted to political influence
  • The growing sophistication of targeted political mobilization in a regulatory environment with little democratic accountability

The report documented three distinct strategies that political actors currently use to weaponise the DIM:

  • Mobilize supporters through identity threats
  • Divide an opponent’s coalition
  • Leverage influence techniques informed by behavioral science

Despite this range of techniques, weaponised political ad targeting will rarely, if ever, be effective in changing individuals’ deeply-held beliefs. Instead, the goals of weaponised DIM campaigns will be to amplify existing resentments and anxieties, raise the emotional stakes of particular issues or foreground some concerns at the expense of others, stir distrust among potential coalition partners, and subtly influence decisions about political behaviors (like whether to go vote or attend a protest). In close elections, if these tactics offer even marginal advantages, groups willing to engage in ethically dubious machinations may reap significant benefits.

The report suggested that key points of intervention for mitigating harms are the technical structures, institutional policies, and legal regulations of the DIM. One significant further step companies could take would be to categorically refuse to work with dark money groups. Platforms could also limit weaponisation by requiring explicit, non-coercive user consent for viewing any political ads that are part of a split-testing experiment. Future ethical guidelines for political advertising could be developed in collaboration with independent committees representing diverse communities and stakeholders. All of these possible steps have benefits, risks, and costs, and should be thoroughly and seriously considered by corporations, regulators, and civil society.

The report concluded that whatever the future of online ad regulation, the consideration of political ads will only be one component in a larger effort to combat disinformation and manipulation. Without values like fairness, justice, and human dignity guiding the development of the DIM and a commitment to transparency and accountability underlying its deployment, such systems are antithetical to the principles of democracy.



Andy Black Associates has been awarded and officially listed as a G-Cloud 10 (G10) cloud service provider for UK government

Andy Black Associates has been awarded and officially listed as a G-Cloud 10 (G10) cloud hosting service provider for its suite of digital services for Parish Councils and local government. G10 services will become available on the Digital Marketplace on 2nd July 2018.

The digital transformation of Parish Councils has begun. Parish Councils originated in medieval times and are the first level of government for UK citizens. Andy Black Associates provide Parish Councils with a low-cost, easy-to-use and customisable hosted WordPress website template, specifically designed for Parish Councils, that will enable them to improve engagement with the local community, comply with the 2015 Transparency Code and provide a better service for parishioners. The hosted website is also fully responsive when viewed on a mobile device.

The hosted cloud software-as-a-service for Parish Councils includes monthly backups, data storage, data security, support and access to a streamed video e-learning library that enables Parish Council members to easily learn how to customise their sites, enabling value added services such as how to add the minutes of meetings, how to create an email newsletter, how to integrate social media or how to add YouTube content.

The service was developed and iterated over the last year by collaborating with parish clerks, parish councillors and local government officers and is currently being rolled out by the Hereford Association of Local Councils, where over 50 Parish Councils have already adopted the cloud service. Some “early adopters” in this group are starting to develop their Parish Council websites into community hubs.

Andy Black Associates awarded G-Cloud 10 provider for UK government.
Google Maps integration allows virtual walk-throughs of building applications

Lynda Wilcox, the Chief Executive of Hereford Association of Local Councils, said “The Parish Councils in Hereford using the service have already noticed an increase in the number of parishioners attending meetings, more engagement with older parishioners by email and also more younger parishioners turning up at meetings wanting to get involved in local democracy.”

Mark Millmore, ABA Director of Hosted Services, said “Our low-cost and easy-to-use hosted website template and hosted cloud service can be easily rolled out to any of the 8,356 Parish Councils in England and G-Cloud will be an important route for us to reach these government organisations.

Our software-as-a-service (SaaS) business model will enable Parish Councils to improve their service to the local community and allow significant savings from the Central Government budget allocated to the National Association of Local Councils (NALC) and its 38 independent County Associations for Transparency Code compliance for each of the 8,356 Parish Councils under their administration.

The ABA pricing matrix for Parish Council websites being offered to NALC and to each of the 38 independent County Associations is a one-off fee of £500 each for 1-10 websites, £400 each for 11-30 websites, £300 each for 31-50 websites, £250 each for 50+ websites and £200 each for 100+ websites. After the first year there is a £100 annual fee for each website, covering support, maintenance updates and backups. Our low-cost and easy-to-use cloud service will help Parish Councils comply with the Transparency Code and provide a better service to the local community.
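The tier table above can be expressed as a simple lookup. Note that the exact tier boundaries (for instance, where “50+” ends and “100+” begins) are our assumption, since the quoted ranges overlap at 50:

```python
def per_site_fee(n_sites: int) -> int:
    """One-off per-website fee in GBP from the ABA tier table.
    Tier boundaries assumed as 1-10, 11-30, 31-50, 51-99, 100+."""
    if n_sites >= 100:
        return 200
    if n_sites > 50:
        return 250
    if n_sites > 30:
        return 300
    if n_sites > 10:
        return 400
    return 500

def first_year_cost(n_sites: int) -> int:
    """Total first-year cost: one-off fee per site.
    The £100 annual fee per website only begins after the first year."""
    return n_sites * per_site_fee(n_sites)

# e.g. a county association rolling out 50 sites pays 50 * £300.
print(first_year_cost(50))
```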

Take a look at some examples of our Parish Council websites:

As such, we are delighted to have been awarded a place on the G10 Agreement, the latest iteration of G-Cloud, and can’t wait to take advantage of the many opportunities that the initiative offers for both suppliers and government bodies.”

Mark Millmore can be contacted on 07891108154 for further information.

G-Cloud is a Crown Commercial Service (CCS) initiative to encourage public sector adoption of cloud services by connecting government organisations with providers of all sizes in a secure and open environment. The CCS acts on behalf of the Crown to drive savings for the taxpayer and improve the quality of commercial and procurement activity across both local and central government.

To qualify for inclusion in G10, organisations need to prove that they are a suitable and secure potential partner for government technology projects. They must be prepared to list the capabilities of their products, along with indicative pricing. As a result, G10 provides public sector bodies with an open, secure and transparent digital marketplace in which to search for cloud solutions.

It also provides new business opportunities to businesses that pass the checks required to qualify for G10 status. Crown Commercial Service suppliers are given an opportunity to advertise their services to a wide range of interested public sector bodies in a competitive environment. Since it became available in 2012, UK government organisations have placed billions of pounds’ worth of orders through the service, with most orders being won by SMEs.

Mueller & Russian meddling – an inconvenient truth in the age of digital marketing

Mueller, Russian meddling and digital marketing

This fascinating and informative article is by the blogger Moon of Alabama.

“Last week the U.S. Justice Department indicted the Russian Internet Research Agency on some dubious legal grounds. It covers thirteen Russian people and three Russian legal entities. The main count of the indictment is an alleged “Conspiracy to Defraud the United States”.

The published indictment gives support to Moon of Alabama’s long-held belief that there was no “Russian influence” campaign during the U.S. election. What is described and denounced as such was instead a commercial marketing scheme which ran click-bait websites to generate advertisement revenue and created online crowds around virtual persona to promote whatever its commercial customers wanted to promote. The size of the operation was tiny when compared to the hundreds of millions in campaign expenditures. It had no influence on the election outcome.

The indictment is fodder for the public to prove that the Mueller investigation is “doing something”. It distracts from further questioning the origin of the Steele dossier. It is full of unproven assertions and assumptions. It is a sham in that none of the Russian persons or companies indicted will ever come in front of a U.S. court. That is bad because the indictment is built on the theory of a new crime which, unless a court throws it out, can be used to incriminate other people in other cases and might even apply to this blog. The latter part of this post will refer to that.

In the early 1990s, some dude in St. Petersburg made a good business selling hot dogs. He opened a colourful restaurant. Local celebrities and politicians were invited to lend it notoriety while the restaurant served cheap food at inflated prices. It was a good business. A few years later he moved to Moscow and gained contracts to cater to schools and to the military. The food he served was still substandard.

But catering bad food as school lunches gave him, by chance, the idea for a new business:

Parents were soon up in arms. Their children wouldn’t eat the food, saying it smelled rotten.
As the bad publicity mounted, Mr Prigozhin’s company, Concord Catering, launched a counterattack, a former colleague said. He hired young men and women to overwhelm the internet with comments and blog posts praising the food and dismissing the parents’ protests.

“In five minutes, pages were drowning in comments,” said Andrei Ilin, whose website serves as a discussion board about public schools. “And all the trolls were supporting Concord.”

The trick worked beyond expectations. Prigozhin had found a new business. He hired some IT staff and low paid temps to populate various message boards, social networks and the general internet with whatever his customers asked him for.

Have you a bad online reputation? Prigozhin can help. His internet company will fill the net with positive stories and remarks about you. Your old and bad reputation will be drowned by the new and good one. Want to promote a product or service? Prigozhin’s online marketeers can address the right crowds.


To achieve those results the few temps who worked on such projects needed to multiply their online personalities. It is better to have fifty people vouch for you online than just five. No one cares if these are real people or just virtual ones. The internet makes it easy to create such sock-puppets. The virtual crowd can then be used to push personalities, products or political opinions. Such schemes are nothing new or special. Every decent “western” public relations and marketing company will offer a similar service and has done so for years.

While it is relatively easy to have sock-puppets swamp the comment threads of such sites as this blog, it is more difficult to have a real effect on social networks. These depend on multiplier effects. To gain many real “likes”, “re-tweets” or “followers” an online persona needs a certain history and reputation. Real people need to feel attached to it. It takes some time and effort to build such a multiplier personality, be it real or virtual.

At some point, Prigozhin, or whoever by then owned the internet marketing company, decided to expand into the lucrative English-speaking market. This required building many English-language online personas and giving them some history and time to gain crowds of followers and a credible reputation. The company sent a few of its staff to the U.S. to gather impressions, pictures and experience of the surroundings, which they would later use to pose as U.S. locals. It was a medium-size, long-term investment of perhaps a hundred thousand bucks over two or three years.

The U.S. election provided an excellent environment for building reputable online personas with large followings of people with distinguishable mindsets. The political affinity was not important. The personalities only had to be very engaged and stick to their issue – be it left or right or whatever. The sole point was to gain as many followers as possible who could be segmented along social-political lines and marketed to the company’s customers.

Again – there is nothing new to this. It is something hundreds, if not thousands of companies are doing as their daily business. The Russian company hoped to enter the business with a cost advantage. Even its mid-ranking managers were paid as little as $1,200 per month. The students and other temporary workers who would ‘work’ the virtual personas as puppeteers would earn even less. Any U.S. company in a similar business would have higher costs.

In parallel to building virtual online personas, the company also built some click-bait websites and groups and promoted these through mini Facebook advertisements. These were the “Russian influence ads” on Facebook the U.S. media were so enraged about. They included the promotion of a Facebook page about cute puppies. Back in October, we described how those “Russian influence” ads (most of which were shown after the election or were not seen at all) were simply part of a commercial scheme:

The pages described and the ads leading to them are typical click-bait, not part of a political influence op.

One builds pages with “hot” stuff that hopefully attracts lots of viewers. One creates ad-space on these pages and fills it with Google ads. One attracts viewers and promotes the spiked pages by buying $3 Facebook mini-ads for them. The mini-ads are targeted at the most susceptible groups.
A few thousand users will come and look at such pages. Some will ‘like’ the puppy pictures or the rant for or against LGBT and further spread them. Some will click the Google ads. Money then flows into the pockets of the page creator. One can rinse and repeat this scheme forever. Each such page is a small effort for a small revenue. But the scheme is highly scalable and parts of it can be automatized.

Because of the myriad of U.S. sanctions against Russia, the monetization of these business schemes required some creativity. One can easily find the name of a real U.S. person together with the assigned social security number and its date of birth. Those data are enough to open, for example, a Paypal account under a U.S. name. A U.S. customer of the cloaked Russian Internet company could then pay to the Paypal account and the money could be transferred from there to Moscow. These accounts could also be used to buy advertising on Facebook. The person whose data was used to create the account would never learn of it and would have no loss or other damage. Another scheme is to simply pay some U.S. person to open a U.S. bank account and to then hand over the ‘keys’ to that account.

The Justice Department indictment is quite long and detailed. It must have been expensive. If you read it do so with the above in mind. Skip over the assumptions and claims of political interference and digest only the facts. All that is left is, as explained, a commercial marketing scheme.

I will not go into every detail of the indictment, but here are some points that support the above description.

Point 4:

Defendants, posing as U.S. persons and creating false U.S. personas, operated social media pages and groups designed to attract U.S. audiences. These groups and pages, which addressed divisive U.S. political and social issues, falsely claimed to be controlled by U.S. activists when, in fact, they were controlled by Defendants. Defendants also used the stolen identities of real U.S. persons to post on social media accounts. Over time, these social media accounts became Defendants’ means to reach significant numbers of Americans …
Point 10d:

By in or around April 2014, the ORGANIZATION formed a department that went by various names but was at times referred to as the “translator project.” This project focused on the U.S. population and conducted operations on social media platforms such as YouTube, Facebook, Instagram, and Twitter. By approximately July 2016, more than eighty ORGANIZATION employees were assigned to the translator project.
(Some U.S. media today made the false claim that $1.25 million per month was spent by the company on its U.S. campaign. But Point 11 of the indictment says that the company ran a number of such projects directed at a Russian audience, while only the one described in 10d above was aimed at a U.S. audience. All these projects together had a monthly budget of $1.25 million.)

(Points 17, 18 and 19 indict individual persons who worked for the “translator project” “to at least in and around [some month] 2014”. It is completely unclear how these persons, who seem to have left the company two years before the U.S. election, are supposed to have anything to do with the claimed “Russian influence” on the U.S. election and the indictment.)

Point 32:

Defendants and their co-conspirators, through fraud and deceit, created hundreds of social media accounts and used them to develop certain fictitious U.S. personas into “leader[s] of public opinion” in the United States.
The indictment then goes on and on describing the “political activities” of the sock-puppet personas. Some posted pro-Hillary slogans, some anti-Hillary stuff, some were pro-Trump, some anti-everyone, some urged not to vote, others to vote for third party candidates. The sock-puppets did not create or post fake news. They posted mainstream media stories.

Some of the personas called for going to anti-Islam rallies while others promoted pro-Islam rallies. The Mueller indictment lists a total of eight rallies. Most of these did not take place at all. No one joined the “Miners For Trump” rallies in Philly and Pittsburgh. A “Charlotte against Trump” march on November 19 – after the election – was attended by one hundred people. Eight people came for a pro-Trump rally in Fort Myers.

The sock-puppets called for rallies to establish themselves as ‘activist’ and ‘leadership’ persona, to generate more online traffic and additional followers. There was, in fact, no overall political trend in what the sock-puppets did. The sole point of all such activities was to create a large total following by having multiple personas which together covered all potential social-political strata.

At Point 86 the indictment turns to Count Two – “Conspiracy to Commit Wire Fraud and Bank Fraud”. The puppeteers opened, as explained above, various PayPal accounts using ‘borrowed’ data.

Then comes the point which confirms the commercial marketing story as laid out above:

Point 95:

Defendants and their co-conspirators also used the accounts to receive money from real U.S. persons in exchange for posting promotions and advertisements on the ORGANIZATION-controlled social media pages. Defendants and their co-conspirators typically charged certain U.S. merchants and U.S. social media sites between 25 and 50 U.S. dollars per post for promotional content on their popular false U.S. persona accounts, including Being Patriotic, Defend the 2nd, and Blacktivist.
There you have it. There was no political point to what the Russian company did. Whatever political slogans one of the company’s sock-puppets posted had only one aim: to increase the number of followers for that sock-puppet. The sole point of creating a diverse army of sock-puppets with large following crowds was to sell the ‘eyeballs’ of the followers to the paying customers of the marketing company.

There were, according to the indictment, eighty people working on the “translator project”. These controlled “hundreds” of sock-puppet online accounts, each with a distinct “political” personality. Each of these sock-puppets had a large number of followers – several hundred thousand in total. Now let’s assume that one to five promotional posts can be sold per day on each sock-puppet’s content stream. The scheme generates several thousand dollars per day ($25 per promo, hundreds of sock-puppets, 1-5 promos per day per sock-puppet). The costs were limited to the wages of up to eighty persons in Moscow, many of them temps, of whom the highest paid received some $1,000 per month. While the upfront multiyear investment to create and establish the virtual personas was probably significant, this likely was, overall, a profitable business.
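The back-of-envelope estimate above can be made concrete. The sketch below uses only figures assumed in the text (number of sock-puppets, promos per day, price per promo, wage levels); the function names and the specific low/high values chosen are illustrative, not from the indictment.

```python
# Rough revenue model for the promotional-post scheme described above.
# All inputs are illustrative assumptions, not figures from the indictment.

def daily_revenue(num_puppets, promos_per_day, price_per_promo):
    """Gross revenue per day from selling promotional posts."""
    return num_puppets * promos_per_day * price_per_promo

# Low and high ends of the text's assumptions:
# "hundreds" of sock-puppets, 1-5 promos/day, $25-$50 per promo.
low = daily_revenue(num_puppets=200, promos_per_day=1, price_per_promo=25)
high = daily_revenue(num_puppets=500, promos_per_day=5, price_per_promo=50)

# Wage ceiling: up to eighty staff, the highest paid at ~$1,000/month.
monthly_wage_ceiling = 80 * 1_000

print(f"Daily revenue range:   ${low:,} - ${high:,}")
print(f"Monthly revenue range: ${low * 30:,} - ${high * 30:,}")
print(f"Monthly wage ceiling:  ${monthly_wage_ceiling:,}")
```

Even at the low end, monthly revenue comfortably exceeds the Moscow wage bill, which is the text's point: the operation works as an ordinary, profitable marketing business.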

Again – this had nothing to do with political influence on the election. The sole point of political posts was to create ‘engagement‘ and a larger number of followers in each potential social-political segment. People who buy promotional posts want these to be targeted at a specific audience. The Russian company could offer whatever audience was needed. It had sock-puppets with a pro-LGBT view and a large following and sock-puppets with anti-LGBT views and a large following. It could provide pro-2nd amendment crowds as well as Jill Stein followers. Each of the sock-puppets had over time generated a group of followers that were like-minded. The entity buying the promotion simply had to choose which group it preferred to address.

The panic of the U.S. establishment over the loss of their preferred candidate created an artificial storm over “Russian influence” and assumed “collusion” with the Trump campaign. (Certain Democrats though, like Adam Schiff, profit from creating a new Cold War through the armament companies that sponsor them.)

The Mueller investigation found no “collusion” between anything Russian and the Trump campaign. The indictment does not mention any. The whole “Russian influence” storm is based on a misunderstanding of the commercial activities of a Russian marketing company in U.S. social networks.

There is a danger in this. The indictment sets up a new theory of nefarious foreign influence that could be applied to even this blog. As U.S. lawyer Robert Barnes explains:

The only thing frightening about this indictment is the dangerous and dumb precedent it could set: foreign nationals criminally prohibited from public expression in the US during elections unless registered as foreign agents and reporting their expenditures to the FEC.

Mueller’s new crime only requires 3 elements: 1) a foreign national; 2) outspoken on US social media during US election, and 3) failed to register as a foreign agent or failed to report receipts/expenditures of speech activity. Could indict millions under that theory.

The legal theory of the indictment for most of the defendants and most of the charges alleges that the “fraud” was simply not registering as a foreign agent or not reporting expenses to the FEC because they were a foreign national expressing views in a US election.
Author Leonid Bershidsky, who writes for Bloomberg, remarks:

“I’m actually surprised I haven’t been indicted. I’m Russian, I was in the U.S. in 2016 and I published columns critical of both Clinton and Trump w/o registering as a foreign agent.”

As most of you will know, your author is German. I write pseudonymously for a mostly U.S. audience. My postings are political and during the U.S. election campaign expressed an anti-Hillary view. The blog is hosted on U.S. infrastructure paid for by me. I am not registered as a foreign agent or with the Federal Election Commission.

Under the theory on which the indictment is based I could also be indicted for a similar “Conspiracy to Defraud the United States”.

(Are those of you who kindly donated to this blog co-conspirators?)

When Yevgeny Prigozhin, the hot dog caterer who allegedly owns the internet promotion business, was asked about the indictment he responded:

“The Americans are really impressionable people, they see what they want to see. […] If they want to see the devil, let them see him.”

Syrians are creating “parish councils” to restore grassroots democracy

You may think Syrians are trapped between a rock and a hard place and face a choice between Bashar Al Assad and the jihadists. But the real choice being fought out by Syrians is between violent authoritarianism on the one hand and grassroots democracy on the other. Syrians are creating “parish councils” to help restore civil society.

When Robin Yassin-Kassab interviewed activists, fighters and refugees for his book Burning Country: Syrians in Revolution and War, he discovered the democratic option is real, even if beleaguered. To the extent that life continues in the liberated but heavily bombed areas – areas independent of both the Assad regime and ISIL – it continues because self-organised local councils are supplying services and aid.


On 18 July 2017 women and men in Saraqib, eastern Idlib, participated in elections for their local council. According to the election commission, 2,475 people cast their ballot, 55 percent of eligible voters. Just days earlier, the three candidates had held a lively public debate. This is unheard of in ‘Assad’s Syria’ where free elections have not been held in five decades of dictatorship. And this is the alternative to the regime – self-organization, democracy and local autonomy – not ISIL and not foreign occupation.

Another example is Daraya, a suburb west of Damascus suffering under a starvation siege, which is run by a council. Its 120 members select executives by vote every six months. The council head is chosen by public election. The council runs schools, a hospital, and a public kitchen, and manages urban agricultural production. Its office supervises the Free Syrian Army militias defending the town. Amid constant bombardment, Daraya’s citizen journalists produce a newspaper, Enab Baladi, which promotes non-violent resistance. In a country once known as a “kingdom of silence”, there are more than 60 independent newspapers and many free radio stations.

And as soon as the bombing eases, people return to the streets with their banners. Recent demonstrations against Jabhat Al Nusra across Idlib province indicate that the Syrian desire for democracy burns as fiercely as ever.

Where possible, the local councils are democratically elected – the first free elections in half a century. Omar Aziz, a Syrian economist and anarchist, provided the germ. In the revolution’s eighth month he published a paper advocating the formation of councils in which citizens could arrange their affairs free of the tyrannical state. Aziz helped set up the first bodies, in suburbs of Damascus. He died in regime detention in 2013, a month before his 64th birthday. But by then, councils had sprouted all over the country.

Some council members were previously involved in the revolution’s original grassroots formations. They were activists, responsible first for coordinating protests and publicity, then for delivering aid and medicine. Other members represented prominent families or tribes, or were professionals selected for specific practical skills.

In regime-controlled areas, councils operate in secret. But in liberated territory people can organise publicly. These are tenacious but fragile experiments. Some are hampered by factionalism. Some are bullied out of existence by jihadists.

Manbij, a northern city, once boasted its own 600-member legislature and 20-member executive, a police force, and Syria’s first independent trade union. Then ISIL seized the grain silos and the democrats were driven out. Today Manbij is called “Little London” for its preponderance of English-accented jihadists.

In some areas the councils appear to signal Syria’s atomisation rather than a new beginning. Christoph Reuter calls it a “revolution of localists” when he describes “village republics” such as Korin, in Idlib province, with its own court and a 10-person council.

But Aziz envisaged councils connecting the people regionally and nationally, and democratic provincial councils now operate in the liberated parts of Aleppo, Idlib and Deraa. In the Ghouta region near Damascus, militia commanders were not permitted to stand as candidates. Fighters were, but only civilians won seats.

In Syria’s three Kurdish-majority areas, collectively known as Rojava, a similar system prevails, though the councils there are known as communes. In one respect they are more progressive than their counterparts elsewhere – 40 per cent of seats are reserved for women. In another, they are more constrained – they work within the larger framework of the PYD, which monopolises control of finances, arms and media.

The elected council members are the only representative Syrians we have. They should be key components in any serious settlement.

In a post-Assad future, local democracy could allow polarised communities to coexist under the Syrian umbrella.

Towns could legislate locally according to their demographic and cultural composition and mood. The alternative to enhanced local control is new borders, new ethnic cleansings, new wars. At the very least, the councils deserve political recognition by the United Nations and others. Council members should be a key presence on the opposition’s negotiating team at any talks.

And the councils deserve protection. Mr Al Assad’s bombs hit the schools, hospitals, bakeries, and residential blocks that the councils are trying desperately to service. If the bombardment were stopped the councils would no longer be limited to survival. They could focus instead on rebuilding Syrian nationhood and further developing popular institutions.

As the US-led invasion of Iraq showed us, only the people themselves can build their democratic structures. And today Syrians are practising democracy, building their own institutions, in the most difficult of circumstances. Their efforts don’t fit in with the easy Assad-or-ISIL narrative, however, and so we rarely deign to notice.

Perhaps Syria looks like a huge, expensive and complicated problem that can only be contained with ongoing military action. If this is our only strategy, Syria will fester like an open sore. Perhaps other options are available; if so, let’s test them to see if they are feasible.

Andy Black Associates (ABA) provide English parish councils with a specifically designed, low-cost, easy-to-use and customisable WordPress website application, accessed as a cloud service, that enables parish councils to comply with the 2015 Transparency Code and improve engagement with the local community. This type of model can be adapted for Syrian local councils.

Perhaps the UN could set up and manage the cloud service and create a framework where Syrian local councils receive phased financial support for projects to rebuild their local communities in exchange for transparency, local democratic accountability and the creation of local neighbourhood plans. This would help prevent corruption, create local employment and increase grassroots democratic engagement. It could also stem the tide of Syrian refugees into Europe and encourage others to return home to rebuild their country.

Good leaders use emotional intelligence to stop microaggression in the workplace

Good leaders use emotional intelligence to stop microaggression in the workplace. Would your company leadership consider the following to be “good-natured” joking in the office: unwanted “compliments” toward attractive female co-workers, a disabled co-worker being made the subject of some jokes, or a male co-worker being mocked because he isn’t considered masculine enough? No, because these employees are not being made to feel welcome.

Employers should care about this type of workplace behavior not only because they should want to be good corporate citizens, but because this sort of discriminatory behavior is harmful in business. So much so that US legislation like Title VII of the Civil Rights Act of 1964 and The Civil Rights Act of 1991 was passed to address various types of overt workplace discrimination.

But there’s one thing these acts cannot address that you as a leader can: a new form of discrimination called microaggression.

Microaggressions are everyday acts that carry a subtle hint of racism, sexism, or homophobia. I see it encroaching into many workplaces, making professional lives more challenging and leaving a damaging effect on businesses.


Corporations are realizing that unconscious bias, a form of microaggression, prevents improvement of workforce diversity and employee productivity. Microaggressions that point to ageism and race can also have harmful effects on employees.

While microaggression is an age-old issue, it cannot be accepted as the norm in the workplace. Incoming generations joining the workforce shouldn’t be left defenseless to stop it. It is important to recognize microaggressions for what they are, manage them effectively, and prevent their damage to performance, productivity, and profitability.

Though it may be a byproduct of diversity, you cannot allow microaggression to consume your people as well as your profit. Recognize where it starts, learn how to micro-manage it and grow with the process.

1. Recognise where it starts.

A microaggression is a subtle way of showing one’s bias and discriminating tendencies. Any statement, joke, or inappropriate inquiry alluding to someone’s gender, race, or even age, can be a sign of a microaggression, especially if it’s said in the context of one’s weakness.

A high turnover rate can also be a sign of microaggressions in the work environment. Any personal attack based on one’s unique qualities can build up feelings of incompetence, inadequacy, or resentment, which leads an employee to underperform and search for a different job.

According to a study by Michigan State University, all organizations should consider the nature and impact of sly or seemingly unintentional forms of discrimination. Racial microaggressions, insidious mistreatment, and exclusion are often discounted because they are vague or cryptic and the perpetrators can argue that they are unintentional. Such experiences, however, can have a significant detrimental effect on employee morale and productivity, resulting in substantial financial losses and even a risk of litigation.

2. Open the lines of communication.

The workplace is every worker’s second home. The environment should make them feel secure and respected, which starts with having an open communication line among employees and with the management.

Create sessions that are intended solely for discussing microaggressions to raise awareness and minimize these behaviors at work. Hold support groups or forums that allow reports of incidences of microaggressions. This could inspire making new policies that are more inclusive and improve the company’s ethical standards.

It’s also good to get workers involved in the community, as it allows them to learn about each other and tackle the bigger problems out there together.

3. Grow with the process. 

Make your workers aware that working for professional success is not an end in itself. A person’s self-worth also comes from the wealth of experience and relationships he or she has built over the years.

A person’s uniqueness can bring more quality and value to the work environment, and it enriches your work-life experience when you can embrace your true self. So honor every worker’s uniqueness in the workplace, especially during occasions such as International Women’s Day, LGBT Pride Month, International Day of Older Persons, International Day of Persons with Disabilities, and of course African-American History Month.

Microaggressions in the workplace will continue to challenge every aspect of a business along with its processes, organizational structure, network of people, ethical standards, and overall success. Regardless of size or nature, every organization should lay a sufficient groundwork for a workplace that secures both its people and the business in the years ahead.

Improving work relationships may not have a numerical value of its own, but a better quality work environment does translate into higher productivity, which if handled correctly, can lead to greater profits. That’s what makes the people who work with you such an invaluable resource.


The digital transformation of Parish Councils has begun

The digital transformation of Parish Councils has begun. Parish Councils originated in medieval times and are the first level of government for UK citizens. They are now adopting cloud computing services to provide a better service for the local community.

We are proud to be helping this transformation as a G-Cloud 9 (G9) cloud service provider offering a suite of digital services for Parish Councils and local government. The cloud services became available for public sector institutions via the UK Government Digital Marketplace on 22nd May 2017.

ABA provide Parish Councils with a specifically designed, low-cost, easy-to-use and customisable WordPress website template, accessed as a cloud service, that will enable Parish Councils to comply with the 2015 Transparency Code and improve engagement with the local community.

Google Maps integration allows virtual walk-throughs of planning and building applications

The Parish Council website is also fully responsive when viewed on a mobile device. This is particularly important as today web pages are more likely to be viewed on mobiles than on PCs, and this trend will only accelerate. Younger parishioners are overwhelmingly “mobile-first” and this key demographic will be difficult to engage if a Parish Council website is not mobile-friendly and responsive.



The software-as-a-service for Parish Councils includes monthly backups, data storage, data security and support – ABA manage all the technical infrastructure. This allows parish clerks to focus on managing the parish paperwork and documentation. When compliance information needs to be published, the service is simple to use and parish clerks can easily upload the information to their sites.

Parish clerks will be able to comply with the Transparency Code and easily publish:

  • All items of expenditure above £100
  • End of year accounts and annual governance statements
  • Internal audit reports
  • List of councillor or member responsibilities
  • Details of public land and building assets
  • The minutes, agendas and meeting papers of formal meetings.

As well as the required compliance data listed above, additional information can also be easily added, including:

  • Land & property planning applications
  • RSS feeds from local government
  • Google Maps
  • Parish history
  • Local services
  • Neighbourhood Development Plans
  • Surveys and polls
  • Email newsletters
  • Social media integration

Combining the required compliance data with complementary parish information makes the websites more engaging. In addition, when parishioners visit the site they will find the navigation and drop-down menus are uncluttered, mobile-friendly and easy-to-use.

The cloud service has been designed to take account of the various levels of network coverage in rural areas and can be accessed on PCs, laptops and mobile devices connected to 3G, 4G or broadband networks. Parish clerks who live in rural areas without broadband can use a laptop connected via the “hotspot” capabilities of a 3G or 4G mobile device to update and upload content onto the cloud service.

Parish clerks can check the 3G and 4G network coverage for their parish using this free crowdsourced geo-location tool.


The service was developed and iterated over the last year by collaborating with parish clerks, parish councillors and local government officers and is currently being rolled out by the Herefordshire Association of Local Councils, where over 50 Parish Councils have already adopted the cloud service. Some “early adopter” parish clerks are using their newly acquired WordPress skills, learnt by using the ABA video elearning library, to turn the Parish Council websites into community hubs.

Take a look at some examples of our Parish Council websites:

Lynda Wilcox, the Chief Executive of Herefordshire Association of Local Councils, said “The Parish Councils in Hereford using the service have already noticed an increase in the number of parishioners attending meetings, more engagement with older parishioners by email and also more younger parishioners turning up at meetings wanting to get involved in local democracy.”

Mark Millmore, ABA Director, said “Our low-cost and easy-to-use, software-as-a-service (SaaS) can be easily rolled out to any of the 8,356 Parish Councils in England and G-Cloud will be an important route for us to reach these government organisations.

Our software-as-a-service (SaaS) business model will enable Parish Councils to comply with the Transparency Code and improve engagement with the local community. It will also allow significant savings to be made from the UK Government’s £4.7 million grant managed through the National Association of Local Councils (NALC) and its 38 independent County Associations to ensure all 8,356 Parish Councils are compliant with the Transparency Code.

Over the last year we have collaborated with stakeholders to iterate, test and design the service. During this period we have also developed templates, workflows, a cloud server infrastructure and can scale our service to meet client requirements.

The ABA pricing matrix for Parish Council websites being offered to NALC and the 38 independent County Associations that administer the 8,356 Parish Councils is a one off fee of:

  • £500 each for 1-10 websites
  • £400 each for 11-30 websites
  • £350 each for 31-50 websites
  • £300 each for 51-99 websites
  • £250 each for 100+ websites
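The tiered pricing above maps naturally to a small lookup. The sketch below encodes the tier boundaries exactly as listed; the function names are hypothetical, and the assumption that every site in an order is charged at the rate of the tier the order size falls into is an interpretation of the matrix, not stated by ABA.

```python
# Tiered one-off pricing for Parish Council websites, per the matrix above.
# Function names and the flat-tier interpretation are illustrative assumptions.

def price_per_site(n_sites):
    """Return the one-off per-website fee (GBP) for an order of n_sites."""
    tiers = [(10, 500), (30, 400), (50, 350), (99, 300)]
    for upper_bound, fee in tiers:
        if n_sites <= upper_bound:
            return fee
    return 250  # 100+ websites

def order_total(n_sites):
    """Total one-off cost, charging all sites at the order's tier rate."""
    return n_sites * price_per_site(n_sites)

# e.g. a County Association commissioning 45 sites pays £350 each.
print(f"45 sites: £{price_per_site(45)} each, £{order_total(45):,} total")
```

Note the tiers reward scale: a 100-site order at £250 each costs less per site than a 50-site order at £350 each.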

The one-off fee includes all technical set up on a dedicated cloud server at a secure 1&1 UK datacentre, domain name registration, custom email server, plugins, logo, menus and pages, SEO, loading of last 12 months archive Parish Council content, Google Maps integration and access to a customised Parish Council specific video e-learning library.

After the first year, there is a £100 annual fee for each website that covers support, maintenance, updates, backups and access to a Parish Council specific video e-learning library (available online or as CDs).

Whilst not a requirement of Transparency Code compliance, if requested we can also provide SSL certificates (HTTPS) and .gov.uk domain registration for each website; these additional services are charged at cost.

Our low-cost and easy-to-use cloud service will enable Parish Councils to comply with the Transparency Code and improve engagement with the local community. As such, we are delighted to have been awarded a place on the G9 Agreement, the latest iteration of G-Cloud, and can’t wait to take advantage of the many opportunities that the initiative offers for both suppliers and government bodies. The digital transformation of parish councils has begun.”

Digital transformation is not just for parish councils, soon every citizen in Herefordshire and Gloucestershire will benefit from Fastershire.

Fastershire is a partnership between Herefordshire Council and Gloucestershire County Council to bring faster broadband to the two counties, with funding from central government’s Broadband Delivery UK matched by the local authorities.

Phase 1 of the project, in partnership with BT, will see around 90% of Gloucestershire and Herefordshire having access to fibre broadband, with all premises in the project area being able to access a minimum of 2Mbps.

Phase 2 of the project will extend fibre coverage further across both counties to make ultrafast speeds available to over 6,500 of the most difficult to reach rural homes and businesses.


The ultimate aim is that by 2018 there will be access to fast broadband for all who need it. Fastershire is not just about technology. The project also includes social and digital inclusion activities, and an extensive ‘Business Support’ programme, designed to help small and medium size businesses enhance their digital skills and use fibre broadband to grow their businesses and be more competitive.

To help the small businesses of Herefordshire and Gloucestershire capitalise on this opportunity, ABA can also provide low-cost, easy-to-use and customisable WordPress ecommerce website templates, accessed as a cloud service. An example of the ecommerce website template is being used by Pengethley Farm Shop.


Small businesses in Herefordshire and Gloucestershire using our ecommerce cloud service will also be able to take advantage of the ABA digital transformation platform to drive business and generate revenue.

It is expected that Fastershire will help to boost the local economy by £420m over the next ten years.

Fastershire will revolutionise the way that people of all ages across Herefordshire and Gloucestershire participate in democracy, work, learn and play, and will benefit generations to come.


The “parish council” model for local democracy can also be used as a template for nation-building after war or revolution.

For enquiries about the parish council and ecommerce cloud services contact Mark Millmore on 07891108154

For enquiries about the digital transformation platform and digital training contact Andy Black on 07881 314570

Additional information:

G-Cloud is a Crown Commercial Service (CCS) initiative to encourage public sector adoption of cloud services by connecting government organisations with providers of all sizes in a secure and open environment. The CCS acts on behalf of the Crown to drive savings for the taxpayer and improve the quality of commercial and procurement activity across both local and central government.

To qualify for inclusion in G9, organisations need to prove that they are a suitable and secure potential partner for government technology projects. They must be prepared to list the capabilities of their products, along with indicative pricing. As a result, G9 provides public sector bodies with an open, secure and transparent digital marketplace in which to search for cloud solutions.

It also provides new business opportunities to businesses that pass the checks required to qualify for G9 status. Crown Commercial Service suppliers are given an opportunity to advertise their services to a wide range of interested public sector bodies in a competitive environment. Since it became available in 2012, UK government organisations have placed billions of pounds’ worth of orders through the service, with most orders being won by SMEs.