Facebook: Fooled by Ranking?
April 1, 2022
I sure hope the information in “A Facebook Bug Led to Increased Views of Harmful Content Over Six Months” is an April Fool’s joke. The subtitle is interesting too. “The social network touts downranking as a way to thwart problematic content, but what happens when that system breaks?”
The write up explains:
Instead of suppressing posts from repeat misinformation offenders that were reviewed by the company’s network of outside fact-checkers, the News Feed was instead giving the posts distribution, spiking views by as much as 30 percent globally.
Now let’s think about time. The article reports:
In 2018, CEO Mark Zuckerberg explained that downranking fights the impulse people have to inherently engage with “more sensationalist and provocative” content. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content,” he wrote in a Facebook post at the time.
Why did this happen?
The answer may be that assumptions about the functionality of online systems must be verified by those who know the mechanisms used. Then the functions must be checked on a periodic basis. The practice of slipstreaming changes may introduce malfunctions, which no one catches because no one is rewarded for slowing down the operation.
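The argument for periodic verification can be sketched in code. The names below (`rank_score`, `DOWNRANK_FACTOR`, `check_downranking_invariant`) are invented for illustration; the article says nothing about Facebook’s actual implementation. The point is only that a suppression path can be checked automatically instead of assumed to work:

```python
# Hypothetical sketch: a periodic invariant check for a downranking system.
# All names and numbers here are assumptions for illustration, not a
# description of Facebook's actual code.

DOWNRANK_FACTOR = 0.3  # assumed: flagged posts get 30% of normal distribution


def rank_score(base_score: float, flagged: bool) -> float:
    """Return the distribution score for a post; flagged posts are suppressed."""
    return base_score * DOWNRANK_FACTOR if flagged else base_score


def check_downranking_invariant() -> bool:
    """The kind of periodic check the post argues for: verify that the
    suppression path actually suppresses, instead of assuming it does."""
    normal = rank_score(100.0, flagged=False)
    suppressed = rank_score(100.0, flagged=True)
    return suppressed < normal


# Run the check on a schedule; a slipstreamed change that breaks the
# suppression path would flip this to False and get caught.
assert check_downranking_invariant(), "downranking is not reducing distribution"
```

A check this trivial costs nothing to run after every deploy, which is exactly the kind of slowing-down that, per the post, no one is rewarded for.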
Based on my work for assorted reports and monographs, there are several other causes of a disconnect between what a high technology outfit says and what its systems actually do. Let me highlight what I call the Big Three:
- Explaining something that might be is different from delivering the reality of the system. Management wants to believe that code works, and not too many people want to be the person who asks, “Yeah, is this what the system is actually doing?” Institutional momentum can crush certain types of behavior.
- The dependencies within complex software systems are not understood, particularly by recently hired outside experts, new hires, or — heaven help us — interns who are told to do X without meaningful checks, reviews, and fixes.
- An organization’s implicit policies keep feedback contained so the revenue continues to flow. Who gets promoted for screwing up ad sales? As a result, news releases, public statements, and sworn testimony operate in an adjacent but separate conceptual space from the mechanisms that generate live systems.
It has been my experience that when major problems are pointed out, reactions range from “What do you mean?” to a chuckled comment, “That’s just the way software works.”
What intrigues me is the larger question: “Is the revelation that Facebook smart software does not work as the company believed it did the baseline for the company’s systems?” On the other hand, the information could be an ill considered April Fool’s joke.
My hunch is that the article is not humor. Much of Facebook’s and Silicon Valley’s behavior does not tickle my funny bone. My prediction is that some US regulators and possibly Margrethe Vestager will take this information under advisement.
Stephen E Arnold, April 1, 2022
The Artificial Intelligence Balloon: Leaking a Bit, Eh?
March 30, 2022
I noted “Enterprise AI Needs to Deliver Real Value As Adoption Slows.” I am not able to define “real value,” but let’s not quibble. The write up reports that a survey from a publisher / conference organizer / Silicon Valley luminary has identified what might be a leaking hyperbole balloon.
I noted:
The latest annual AI Adoption in the Enterprise survey from O’Reilly finds that over the last two years the number of organizations with AI applications in production has remained steady at 26 percent. However, many enterprises still lack AI governance. Among respondents with AI products in production, the number of those whose organizations have a governance plan in place to oversee how projects are created, measured, and observed (49 percent) is roughly the same as those that don’t (51 percent).
But AI is the next big thing. Innovation will soar. Employees will be wallowing in extra time to do “human things.” Money will flow.
These statements are indeed true for Amazon, Facebook, Google, and a handful of other outfits. But for Bob’s Trucking Company or a small accounting firm in the Rust Belt, well, not so much, it seems.
The reason may be nestled in this comment in the article:
“For years, AI has been the focus of the technology world,” says Mike Loukides, vice president of content strategy at O’Reilly and the report’s author. “Now that the hype has died down, it’s time for AI to prove that it can deliver real value, whether that’s cost savings, increased productivity for businesses, or building applications that can generate real value to human lives. This will no doubt require practitioners to develop better ways to collaborate between AI systems and humans, and more sophisticated methods for training AI models that can get around the biases and stereotypes that plague human decision-making.”
What’s the fix? Remediation of algorithmic biases, a shift to NFT innovation, or online gambling?
Those are questions for the little people. The largely unregulated giants are happy to do the smart software thing. Big value is well understood by these firms’ management teams.
Stephen E Arnold, March 30, 2022
Quick Question: Fabricated or Synthetic Data?
March 24, 2022
I read “Evidence of Fabricated Data in a Vitamin C trial by Paul E Marik et al in CHEST.” Non-reproducibility appears to be a function of modern statistical methods. Okay. The angle in this article is:
… within about 5 minutes of reading the study it became overwhelmingly clear that it is indeed research fraud and the data is (sic) fabricated.
Synthetic data are fabricated. Some big outfits are into using machine generated data sort of related to real life data to save money.
Here’s my question:
What’s the difference between fabricated data and synthetic data?
I am leaning toward “not much.” One might argue that the motive in a research paper is tenure. In other applications, maybe the goal is just efficiency and its close friend, money. My larger concern is that embedding fabricated and / or synthetic data into applications may lead to some unexpected consequences. Hey, how about that targeting of a kinetic? Screwy ad targeting is one thing, but less benign situations can be easily conceptualized; for example, “We’re sorry. That smart car self driving module did not detect your mom in the crosswalk.”
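The mechanism behind “machine generated data sort of related to real life data” can be shown in a few lines. This is a minimal sketch under my own assumptions (the measurements are invented, and real synthetic-data pipelines use far more elaborate generative models); it only demonstrates the mechanism, not a verdict on whether it differs from fabrication:

```python
# Hypothetical sketch of "synthetic" data: estimate simple statistics from
# real observations, then sample brand-new records from those statistics.
# The input values below are invented for illustration.
import random
import statistics

real_measurements = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]  # stand-in "real" data

mu = statistics.mean(real_measurements)    # 5.0
sigma = statistics.stdev(real_measurements)

random.seed(0)  # reproducible sampling
synthetic = [random.gauss(mu, sigma) for _ in range(6)]

# The synthetic values share the real data's mean and spread, but no row
# corresponds to any actual observation. Whether that is "synthetic" or
# "fabricated" is exactly the question the post raises.
```

Note that nothing in the output marks these rows as generated; downstream applications consume them exactly as they would consume real observations.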
Stephen E Arnold, March 24, 2022
Technology Conferences: What Does Sponsorship Money Buy?
March 14, 2022
I have attended a number of conferences in my 50 year work career. Here’s what I learned:
- Giving a conference organizer money can provide access to the attendee list with phone numbers and email addresses
- Paying for a cocktail, breakfast, or some other gathering within the conference can include a speaking slot
- Supporting the conference with a payment can result in one of those logo-bedecked bags provided to each and every attendee whether the attendee wants the useless pouch or not
- Offering cash may allow a special information channel; for example, a contributor being able to provide a video commercial into each attendee’s hotel room. (Yep, I remember, the Chemical Abstracts’ video in London a decade ago)
- Coughing up money results in slots on the program, and these talks are not on the last day of the conference at the tail end of the program day. Nope. These slots are keynotes or hour-long masterpieces of PowerPointery.
Now you get the idea.
“The Tech Industry Controls CS Conference Funding. What Are the Dangers?” explains a more interesting and somewhat more conceptual approach to paying conference organizers big money or small money over, under, or on the table.
The write up points out that sponsorship money buys a role in selecting the topics and deciding who can talk. That’s the nifty part. The attendee perceives the conference lectures as neutral. Often a panel of “experts” reviews the abstracts and interacts with the speakers. Some conferences offer helpful guidelines. Do conference funding sources manipulate the knobs and dials of the program itself? Yep, conference organizers love to play ball, have favorites (Isn’t Google special?), and their own industry biases (How about quantum computing for intelligence professionals?)
I noted this statement in the cited article:
Relying on large companies and the resources they control can create significant limitations for the kinds of CS research that are proposed, funded and published. The tech industry plays a large hand in deciding what is and isn’t worthy of examination, or how issues are framed. For instance, a tech company might have a very different definition of privacy from that which is used by consumer rights advocates. But if the company is determining the parameters for the kinds of research it wishes to sponsor, it can choose to fund proposals that align with or uphold its own interpretation.
Let’s imagine a hypothetical conference about smart software. The funding entities are part of the Stanford AI Lab persuasion. What happens when Dr. Timnit Gebru and her fellow travelers propose a paper? In an objective, academic, Ivory Tower world, the papers are picked based on some arbitrary set of PhD-infused criteria? What happens if an IBM- or Google-type outfit funds the conference? Forget that Ivory Tower handwaving. The idea will be to advance the agenda. Snorkel, cognitive computing, whatever.
What happens when a history major with an MBA attends the conference looking for something in which to invest his financial firm’s hard earned assets? Is this MBA able to differentiate the goose feathers from the giblets? In my opinion, the MBA will select from the knowledge buffet. Think about a boxed conference lunch sponsored by the keynote speaker’s company. Do you want cheese with your chicken or do you want cheese? See, there is a choice. In reality, one takes what is served.
Even “research” has been converted into information warfare. Objective and exciting, right?
Is the conference organizer complicit? Yep.
Stephen E Arnold, March 14, 2022
Amazon: Does the Online Bookstore Sell Petards?
March 14, 2022
What happens when an Amazon wizard says something that allows a real news outfit to write:
In 2020, Jeff Bezos, then the company’s CEO, told the committee Amazon doesn’t allow staff to use data from individual sellers to make competing products, but couldn’t guarantee “that policy has never been violated.” Executives also said in testimony that the company doesn’t use seller data to copy products and then promote its versions in search results, despite reports to the contrary. Source: “DOJ Asked to Investigate Amazon over Possible Obstruction of Congress”?
What’s a petard? A search of Amazon reveals that it thinks it is a way to find a book in French which seems likely to inflame Tennessee local school board officials. See “Peanut Butter: The Journal de Molly Fredickson”.
The petard of which I am thinking is “hoist by your own petard.” It means, according to the Free Dictionary:
Injured, ruined, or defeated by one’s own action, device, or plot that was intended to harm another; having fallen victim to one’s own trap or schemes. (“Hoist” in this instance is the past participle of the archaic verb “hoist,” meaning to be raised or lifted up. A “petard” was a bell-shaped explosive used to breach walls, doors, and so on.)
Saying one thing under oath and having elected officials learn facts that suggest otherwise is not a credibility booster.
Would senior wizards for the online bookstore dissemble?
Yep, just like some other executives when they say, “Senator, thank you for that question. I don’t know, but I will get back to you.”
Stephen E Arnold, March 14, 2022
IBM: Big Blue May Have Some Digital Re-Engineering to Explain
March 4, 2022
Yo, I am a dinobaby, and I am proud of that fact. You want proof. I know what a rotary dial phone is. I know how to use a facsimile machine. Heck, I can still crank out a mimeograph document. I even know how to get a drink from a terracotta jar in rural Brazil. (Love those chemicals and that wonky purple-blue color which reminds me of Big Blue.)
Several years ago, I read a blog by some IBM people which documented the harvesting of old workers. That blog disappeared, of course. It named managers, disclosed snippets of email, and did a fine job of making clear that oldsters had one function. The idea was that before finding their future elsewhere, the old employees would train their replacements. This is a variation on copying data from a DASD to a zippy new storage device, just with humanoids, not silicon.
I have been following the word dinobaby. I entered it into my log of jazzy new terms coined by millennials and GenXers. I put dinobaby between grosso modo memetic learning and vibe shift. This is not alphabetical I know, but I like the rhythm of the words when offered in a dinner conversation about technology.
The word appeared in “IBM Executives Planned to Rid the Company of Older, Dinobaby Employees and Replace Them with Millennials, Lawsuit Alleges.” I thought the lawsuit was an interesting opportunity for legal eagles to generate some money.
Then I read the February 26, 2022, story “IBM Cannot Kill This Age-Discrimination Lawsuit Linked to CEO.” Despite Covid, financial turmoil, and the unfortunate events in Eastern Europe:
The judge overseeing an age-discrimination case against IBM has denied the IT giant’s motion to dismiss the lawsuit, citing evidence supporting plaintiff Eugen Schenfeld’s claim that CEO Arvind Krishna, then director of IBM research, made the decision to fire him.
The write up includes a link to a legal document and some snazzy code names; for example, Project Concord, Project Baccarat, and Project Ruby. It appears that each project was intended to get the big, noisy, weird dinobabies out of IBM’s life.
Not happening yet.
The write up asserts that there are more than 10,000 mainframe capable dinobabies vaporized by the “projects” implemented during the scintillating tenure of Ginni Rometty, former president and CEO of Big Blue. (Did you know that Ms. Rometty worked at General Motors, an esteemed automobile company which developed the Chevrolet Bolt, a model which caught on fire? The owner was not Ginni Rometty. The burning GM vehicle was owned by an elected official in Vermont.)
IBM may escape punishment for its alleged conversion of humanoids into dinobabies. But it will be interesting to follow the legal machinations which now seek to transform dinobabies into hamsters and gerbils with mainframe and other esoteric skills.
Plus the lawyers can consult IBM Watson for inputs!
Stephen E Arnold, March 4, 2022
Blue Chip Outfits: Clumsy Cheaters?
March 1, 2022
I read that one of the big blue chip accounting / consulting firms revealed the jib of its ethical sails. The information appears in “PwC Fined Over Exam Cheating Involving 1,100 of Its Auditors.” [You will have to pay to read this interesting “real” news report.] I learned from the odd orange newspaper:
PwC Canada has been fined more than $900,000 by Canadian and US accounting regulators over exam cheating involving 1,100 of its auditors. The watchdogs found that the Big Four firm failed to spot that staff were sharing answers in exams between 2016 and 2020 because of shortcomings in its internal standards and test supervision.
What does this suggest about the notion of “quality,” “oversight,” and “integrity” when these words are applied to a blue chip outfit like PwC? PwC says on its About Us page:
Our values define the expectations we have for working with each other and our clients. Although we come from different backgrounds and cultures across the firm, our values are what we have in common. They capture our shared aspirations and expectations, and guide how we make decisions and treat others—they’re what makes us, us.
Does this mean this is the logic used at PwC: We cheat and obviously are likely to perform just about any action because of “shortcomings” in standards? Is the logic, “Well, McKinsey did the opioid work, so we helped 1,100 whiz kids ace an examination”? Is this the lesser of two possible inappropriate blue chip thought processes?
Keep in mind that when PwC “discovered” the cheating, the company “immediately opened an internal investigation.” So it is now 2022 and the question, “How long has PwC been cheating?” remains unanswered.
Stephen E Arnold, March 1, 2022
Bloomberg and the Japan Times on the Plight of Man: A TikTok Video to Come?
February 18, 2022
I read “‘Sapiens’? Humans Aren’t Wise, Just Too Smart for Our Own Good.” Bloomberg is the firm providing the trading system to many of Wall Street’s brightest minds. Japan is the country which has created the management actions of Toshiba and the Toyota subscription to remote starting. What I noted in the write up was this passage:
The late B.K.S. Iyengar, a yogi, once said that intelligence, like money, is a good servant but a bad master. Even science has explored why and how smart people can be so foolish. In a nutshell, it comes down to a cocktail of egocentrism, narcissism and arrogance that overpowers everything else — or what the ancient Greeks called hubris.
From the assertion that spy chips were on motherboards to ways to make life interesting for automobile owners, it is interesting to think about hubris. And the yogi. Was he talking about those who think technology solves mankind’s problems?
Stephen E Arnold, February 18, 2022
What Does Go Bro Suggest for Software?
January 26, 2022
IT workers and sports fans are traditionally represented by the stereotypical portrayals of nerds and jocks. The jocks are very buff, popular individuals, while nerds are smart, socially awkward people. Since the advancement of computer science, the Internet, and videogames, the stereotypes have eroded. ReadWrite explains how the jock and nerd chasm is smaller in, “Why Software Product Development Is The Ultimate Team Sport.”
Teamwork is essential to a successful IT department and/or company. It is extremely important for software product development. Contrary to popular conceptions, programmers and their teams do not isolate themselves. Instead, programmers are part of a dynamic team effort comparable to how professional sports teams are managed.
Software product development teams should be carefully built and be allowed to discover their own work rapport:
“To a large extent, these teams should be able to work free of bureaucracy and politics, focusing entirely on the product at hand. To do that, the other stakeholders need to collaborate to ensure teams have the guidance, resources, and time they need to work with a high degree of independence.
Just as important as assembling the constituent parts and letting them operate autonomously is finding the right fit between them. Teams obviously need to have the right combination of skills to turn the software product development process into a functional, finished product. But they also need the right mix of personalities, clear roles for everyone involved, a cohesive leadership structure, and effective communication channels.”
Similar to sports teams, software development groups need to adapt to the unexpected, overcome persistent obstacles, create meaningful innovation, and repeat success. These are all situations that not only sports and software teams handle, but also all teams in all industries.
Whitney Grace, January 26, 2022
DarkCyber for January 18, 2022 Now Available : An Interview with Dr. Donna M. Ingram
January 18, 2022
The fourth series of DarkCyber videos kicks off with an interview. You can view the program on YouTube at this link. Dr. Donna M. Ingram is the author of a new book titled “Help Me Learn Statistics.” The book is available on the Apple ebook store and features interactive solutions to the problems used to reinforce important concepts explained in the text. In the interview, Dr. Ingram talks about sampling, synthetic data, and a method to reduce the errors which can creep into certain analyses. Dr. Ingram’s clients include financial institutions, manufacturing companies, legal subrogation customers, and specialized software companies.
Kenny Toth, January 18, 2022