23andMe: Those Users and Their Passwords!

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.


“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”

According to the peripatetic Lorenzo Franceschi-Bicchierai:

In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.

Users!
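For the curious, the mechanics here are garden-variety credential stuffing: bad actors replay username-password pairs leaked in other companies’ breaches. Below is a minimal Python sketch of the kind of breached-password check a service could run at signup. The Have I Been Pwned range API is a real public endpoint; the function name and surrounding logic are an illustration, not anything 23andMe is known to deploy.

    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Ask the Have I Been Pwned range API how many times a password
        appears in known breach corpora. Only the first five hex characters
        of the SHA-1 hash ever leave this machine (k-anonymity)."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as response:
            body = response.read().decode("utf-8")
        # Each response line is "HASH_SUFFIX:COUNT".
        for line in body.splitlines():
            candidate, _, count = line.strip().partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        hits = breach_count("password123")
        print(f"Seen {hits:,} times in breach data" if hits else "Not found")

A signup flow that rejected any password with a nonzero count would have blunted exactly the reuse attack 23andMe describes.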

What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.

I find the leak estimate inflation interesting for three reasons:

  1. Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy runs to millions. I think “close enough for horseshoes” is a concept which is wearing out my patience. The difference between 14,000 and almost seven million is not horseshoe scoring.
  2. The concept of “security” continues to suffer setbacks. “Security,” one may ask?
  3. The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the 23andMe case is, in my opinion, “Maybe no one will notice?”

Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.

Stephen E Arnold, December 5, 2023 

Complex Humans and Complex Subjects: A Recipe for Confusion

November 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Disinformation is commonly painted as a powerful force able to manipulate the public like so many marionettes. However, according to Techdirt’s Mike Masnick, “Human Beings Are Not Puppets, and We Should Probably Stop Acting Like They Are.” The post refers to an in-depth Harper’s Magazine piece written by Joseph Bernstein in 2021. That article states there is little evidence to support the idea that disinformation drives people blindly in certain directions. However, social media platforms gain ad dollars by perpetuating that myth. Masnick points out:

“Think about it: if the story is that a post on social media can turn a thinking human being into a slobbering, controllable, puppet, just think how easy it will be to convince people to buy your widget jammy.”

Indeed. Recent (ironic) controversy around allegedly falsified data in behavioral economics research on honesty reminded Masnick of Bernstein’s article. He considers:

“The whole field seems based on the same basic idea that was at the heart of what Bernstein found about disinformation: it’s all based on this idea that people are extremely malleable, and easily influenced by outside forces. But it’s just not clear that’s true.”

So what is happening when people encounter disinformation? Inconveniently, it is more complicated than many would have us believe. And it involves our old acquaintance, confirmation bias. The write-up continues:

“Disinformation remains a real issue — it exists — but, as we’ve seen over and over again elsewhere, the issue is often less about disinformation turning people into zombies, but rather one of confirmation bias. People who want to believe it search it out. It may confirm their priors (and those priors may be false), but that’s a different issue than the fully puppetized human being often presented as the ‘victim’ of disinformation. As in the field of behavioral economics, when we assume too much power in the disinformation … we get really bad outcomes. We believe things (and people) are both more and less powerful than they really are. Indeed, it’s kind of elitist. It’s basically saying that the elite at the top can make little minor changes that somehow leads the sheep puppets of people to do what they want.”

Rather, we are reminded, each person comes with their own complex motivations and beliefs. This makes the search for a solution more complicated. But facing the truth may take us away from the proverbial lamppost and toward better understanding.

Cynthia Murrell, November 22, 2023

Reading. Who Needs It?

September 19, 2023

Book banning, aka media censorship, is an act as old as human intellect. As technology advances, so do the strategies and tools available to assist in book banning. Engadget shares the unfortunate story in “An Iowa School District Is Using AI To Ban Books.” Mason City, Iowa’s school board is leveraging AI technology to generate lists of books to potentially ban from the district’s libraries in the 2023-24 school year.

Governor Kim Reynolds signed Senate File 496 into law after it passed the Republican-controlled state legislature. Senate File 496 changes the state’s curriculum, and it includes verbiage that addresses what books are allowed in schools. The books must be “age appropriate” and without “descriptions or visual depictions of a sex act.”

“Inappropriate” titles have snuck past censors for years, and Iowa’s school board discovered it is not so easy to peruse every school’s book collection. That is where the school board turned to an AI algorithm to undertake the task:

“As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book.”
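The write-up does not say what the “AI software” actually does under the hood. As a deliberately naive sketch of how blunt keyword-style flagging can be, consider the following Python fragment; the term list, function, and sample sentence are all hypothetical:

    import re

    # Hypothetical flag list; real "sexual content" review would need
    # narrative context that no string match can supply.
    FLAG_TERMS = ["sex act", "nude", "intercourse"]

    def flag_book(full_text: str, terms=FLAG_TERMS) -> list[str]:
        """Return every flagged term found in the text. The obvious failure
        mode: a health textbook and a novel score identically."""
        return [
            term for term in terms
            if re.search(r"\b" + re.escape(term) + r"\b", full_text, re.IGNORECASE)
        ]

    # One clinical sentence is enough to land a title on the review list.
    print(flag_book("The nurse explained how to examine a nude patient safely."))

Whether the district’s tool is this crude or something fancier, the output still depends entirely on what goes into the “master list.”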

The AI algorithm has so far listed nineteen titles to potentially ban. These include veteran banned titles such as The Color Purple, I Know Why the Caged Bird Sings, and The Handmaid’s Tale, along with newer ones: Gossip Girl, Feed, and A Court of Mist and Fury.

These titles are arguably inappropriate for elementary schools, questionable for middle schools, and age-appropriate for high schools, but book banning is not good. Parents, teachers, librarians, and other leaders must work together to determine what is best for students. Books also carry age guidance, as videogames, movies, and TV shows do. These titles are tame compared to what kids can access online and on TV.

Whitney Grace, September 19, 2023

The Best Books: Banned Ones, Of Course

September 14, 2023

Every few years the United States deals with a morality crisis surrounding inappropriate items. In the 1980s and 1990s, it was the satanic panic that demonized role-playing games, heavy metal and rap music, videogames, and everything tied to the horror/supernatural genre. The result was the banning of multiple media, including books. Today, teachers, parents, and other adults are worried about kids’ access to so-called pornographic books, specifically books that deal with LGBTQIA+ topics.

The definition of “p*rnography” differs with every individual, but the majority of would-be banners agree that anything describing sex or related topics, nudity, transgenderism, or homosexuality fulfills that definition. While some of the questionable books do depict sex and/or sexual acts, protestors are applying a blanket term to every book they deem inappropriate. Their worst justification for book banning is that they do not need to read a book to know it is “p*rnographic.” [Just avoiding the smart banned word list. Editor]

Thankfully there are states that stand by the First Amendment: “Illinois Passes Bill To Stop Book-Bannings,” says Lit Hub. The Illinois state legislature passed House Bill 2789, which states that schools and libraries that remove books from their collections will not receive state grant money. Opponents complain that their taxes are paying for books they do not like. Radical political groups, including the Proud Boys, have supported book banning.

Other book topics deemed inappropriate include racial themes, death, health and wellbeing, violence, suicide, physical abuse, abortion, sexual abuse, and teen pregnancy. In 1982, Island Trees School District v. Pico, a book banning case, went before the Supreme Court. The plaintiff, student Steven Pico of Long Island, challenged his school district over books the board claimed were “just plain filthy.” The outcome was:

“Pico won, and Justice William Brennan wrote in a majority decision that ‘local school boards may not remove books from school libraries simply because they dislike the ideas contained in those books and seek by their removal to “prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.”’”

We are in a new era characterized by the Internet and changing societal values. The threat to freedom of speech and access to books, however, remains the same.

Whitney Grace, September 14, 2023

Calls for AI Pause Futile at This Late Date

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Well, the nuclear sub has left the base. A group of technology experts recently called for a six-month pause on AI rollouts in order to avoid the “loss of control of our civilization” to algorithms. That might be a good idea, if it had a snowball’s chance of happening. As it stands, observes ComputerWorld’s Rob Enderle, “Pausing AI Development Is a Foolish Idea.” We think foolish is not a sufficiently strong word. Perhaps regulation could have been established before the proverbial horse left the barn, but by now there are more than 500 AI startups, according to Jason Calacanis, noted entrepreneur and promoter.


A sad sailor watches the submarine to which he was assigned leave the dock without him. Thanks, MidJourney. No messages from Mother MJ on this image.

Enderle opines as a premier pundit:

“Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything. … There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market.”

We are reminded that even the development of clones, which is illegal in most of the world, continues apace. The only thing bans seem to have accomplished there is to obliterate transparency around cloning projects. There is simply no way to rein in all the world’s scientists. Not yet. Enderle offers a grain of hope on artificial intelligence, however. He notes it is not too late to do for general-purpose AI what we failed to do for generative AI:

“General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount. Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons.”

So we have that to look forward to. And clones, apparently. The write-up points to initiatives already in the works to protect against “hostile” AI. Perhaps they will even be effective.

Cynthia Murrell, August 29, 2023

The Secret Cultural Erosion Of Public Libraries: Who Knew?

August 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It appears the biggest problem public and school libraries face is demands to ban controversial gay and trans titles. While some libraries are dealing with closures or complete withdrawals of funding, most appear to be in decent standing. Karawynn Long unfortunately discovered that is not the case. She spills the printer’s ink in her Substack post: “The Coming [Cultural Erosion] Of Public Libraries,” with the cleverly deplorable subtitle “global investment vampires have positioned themselves to suck our libraries dry.”

Before she details how a greedy corporation is bleeding libraries like a leech, Long explains the looming cultural erosion brought on by capitalism. A capitalist economic system is not inherently evil, but bad actors exploit it. Long uses a more colorful word for libraries’ cultural erosion; in essence, it describes something good deteriorating into crap.

A great example is when corporations use a platform, e.g. Facebook, Twitter, or Amazon, to pit buyers and sellers against each other while those at the top run away with heaps of cash.

This ties back to public libraries because they use a digital library app called OverDrive. Library patrons use OverDrive to access copies of digital books, videos, audiobooks, magazines, and other media. It is the only app available to public libraries for managing digital media. Patrons access OverDrive via an app called Libby or a Web site portal. In May 2023, the Web site portal deleted a feature that allowed patrons to recommend new titles to their libraries.

OverDrive wants to force users to adopt its Libby app. The Libby app has a “notify me” option that alerts users when their library acquires an item. OverDrive’s overlords also want to collect sellable user data, like other companies. Notably, OverDrive is owned by the global investment firm KKR, Kohlberg Kravis Roberts.

KKR is one of the vilest investment capital companies, dubbed a “vampire capitalist” outfit, and it has a fanged hold on the US’s public libraries. OverDrive flaunts its B corporation status, but that does not mask the villain lurking behind the curtain:

“As one library industry publication warned in advance of the sale to KKR, ‘This time, the acquisition of OverDrive is a ‘financial investment,’ in which the buyer, usually a private equity firm or other financial sponsor, expects to increase the value of the company over the short term, typically five to seven years.’ We are now three years into that five-to-seven, making it likely that KKR’s timeframe for completing maximum profit extraction is two to four more years. Typically this is accomplished by levying enormous annual “management fees” on the purchased company, while also forcing it (through Board of Director mandates) to make changes to its operations that will result in short-term profit gains regardless of long-term instability. When they believe the short-term gains are maxed out, the investment firm sells off the company again, leaving it with a giant pile of unsustainable debt from the leveraged buyout and often sending it into bankruptcy.”

OverDrive likely plans to sell user data, then bleed the public libraries dry until local and federal governments shout, “Uncle!” Between book bans and rising inflation, public libraries will face a reckoning with their budgets before 2030.

Whitney Grace, August 25, 2023

India: Where Regulators Actually Try or Seem to Try

August 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Data Act Will Make Digital Companies Handle Info under Legal Obligation.” The article reports that India’s regulators are beavering away in an attempt to construct a dam to stop certain flows of data. The write up states:

Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar on Thursday [August 17, 2023] said the Digital Personal Data Protection Act (DPDP Act) passed by Parliament recently will make digital companies handle the data of Indian citizens under absolute legal obligation.

What about certain high-technology companies operating with somewhat flexible methods? The article uses the phrase “punitive consequences of high penalty and even blocking them from operating in India.”


US companies’ legal eagles take off. Destination? India. MidJourney captures 1950s grade school textbook art quite well.

This passage caught my attention because nothing quite like it has progressed in the US:

The DPDP [Digital Personal Data Protection] Bill is aimed at giving Indian citizens a right to have his or her data protected and casts obligations on all companies, all platforms be it foreign or Indian, small or big, to ensure that the personal data of Indian citizens is handled with absolute (legal) obligation…

Will this new law have the desired effect? Will certain US high-technology companies comply? I am not sure of the answer, but I have a hunch that a dust-up may be coming.

Stephen E Arnold, August 22, 2023

Thought Leader Thinking: AI Both Good and Bad. Now That Is an Analysis of Note

August 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read what I consider a “thought piece.” This type of essay discusses a topic and attempts to place it in a context of significance. The “context” is important. A blue chip consulting firm may draft a thought piece about forever chemicals. Another expert can draft a thought piece about these chemicals in order to support the companies producing them. When thought pieces collide, there is a possible conference opportunity, definitely some consulting work to be had, and today maybe a ponderous online webinar. (Ugh.)


A modern Don Quixote and thought leader essay writer lines up a windmill and charges. The bold 2023 Don shouts, “Vile and evil windmill, you pretend to grind grain, but you are a mechanical monster destroying the fair land. Yield, I say.” The mechanical marvel just keeps on turning, and the modern Don is ignored until a blade of the windmill knocks the knight to the ground. Thanks, MidJourney. It only took three tries to get close to what I described. Outstanding evidence of degradation of function.

“The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?” considers the “problem” of smart software. My recollection is that artificial intelligence and machine learning have been around for decades. I have a vivid recollection of a person named, I believe, Marvin Weinberger. This gentleman made an impassioned statement at an Information Industry Association meeting about the need for those in attendance to amp up their work with smart software. The year, as I recall, was 1981.

The thought piece does not dwell on the long history of smart software. The interest is in what the thought piece presents as its context; that is:

And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.

The excitement about smart software is sufficiently robust to magnetize those who write thought pieces. Is the outlook happy or sad? You judge. The essay asserts:

In May 2023, the G-7 launched the “Hiroshima AI process,” a forum devoted to harmonizing AI governance. In June, the European Parliament passed a draft of the EU’s AI Act, the first comprehensive attempt by the European Union to erect safeguards around the AI industry. And in July, UN Secretary-General Antonio Guterres called for the establishment of a global AI regulatory watchdog.

I like the reference to Hiroshima.

The thought piece points out that AI is “different.”

It does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox. The pace of progress is staggering.

The thought piece points out that AI or any other technology is “dual use”; that is, one can make a smart microwave or one can make a smart army of robots.

Where is the essay heading? Let’s try to get a hint. Consider this passage:

The overarching goal of any global AI regulatory architecture should be to identify and mitigate risks to global stability without choking off AI innovation and the opportunities that flow from it.

From my point of view, we have a thought piece which recycles a problem similar to squaring the circle.

The fix, according to the thought piece, is to create a “minimum of three AI governance regimes, each with different mandates, levers, and participants.”

To sum up, we have consulting opportunities, we have webinars, and we have global regulatory “entities.” How will that work out? Have you tried to get someone in a government agency, a non-governmental organization, or a federation of conflicting interests to answer a direct question?

While one waits for the smart customer service system to provide an answer, the decades-old technology will zip along, leaving thought piece ideas in the dust. Talk global; fail local.

Stephen E Arnold, August 17, 2023

AI and Non-State Actors

June 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“AI Weapons Need a Safe Back Door for Human Control” contains a couple of interesting statements.

The first is a quote from Hugh Durrant-Whyte, director of the Centre for Translational Data Science at the University of Sydney. He allegedly said:

China is investing arguably twice as much as everyone else put together. We need to recognize that it genuinely has gone to town. If you look at the payments, if you look at the number of publications, if you look at the companies that are involved, it is quite significant. And yet, it’s important to point out that the US is still dominant in this area.

For me, the important point is the investment gap. Perhaps the US should be more aggressive in identifying and funding promising smart software companies?

The second statement which caught my attention was:

James Black, assistant director of defense and security research group RAND Europe, warned that non-state actors could lead in the proliferation of AI-enhanced weapons systems. “A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware…

Several observations:

  1. Smart software ups the ante in modern warfare, intelligence, and law enforcement activities
  2. The smart software technology has been released into the wild. As a result, bad actors have access to advanced tools
  3. The investment gap is important, but the need for skilled smart software engineers, mathematicians, and support personnel is critical in the US. University research departments are, in my opinion, less and less productive. The concentration of research in the hands of a few large publicly traded companies suggests that military, intelligence, and law enforcement priorities will be ignored.

Net net: China, personnel, and institutional biases require attention by senior officials. These issues are not Twitter-scale fooling around. More is at stake. Urgent action is needed, which may be uncomfortable for fans of TikTok and expensive dinners in Washington, DC.

Stephen E Arnold, June 16, 2023

AI Legislation: Can the US Regulate What It Does Not Understand Like a Dull Normal Student?

April 20, 2023

I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive art used on many of his books. Here’s what I call the parrot book’s cover:


You can get a copy at this link.

The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.

Here’s a passage I found worth circling:

But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.

The idea is that those without first-hand knowledge of something cannot make effective regulations.

The essay makes it clear that government regulators may be better off:

formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]

The essay states:

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.
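What might such regularly reported metrics look like in practice? Here is a minimal Python sketch; every field name and value is invented for illustration, since the essay proposes the idea rather than a schema:

    from dataclasses import dataclass

    @dataclass
    class AIDisclosureReport:
        """Hypothetical shape for the recurring disclosure O'Reilly
        describes; fields are illustrative, not a proposed standard."""
        model_name: str
        reporting_period: str
        training_data_sources: list[str]     # provenance of training data
        evaluation_scores: dict[str, float]  # benchmark name -> score
        red_team_findings: int               # issues surfaced this period
        incident_count: int                  # user-facing failures logged
        mitigation_notes: str = ""

    # An invented example of the kind of record a regulator might receive.
    report = AIDisclosureReport(
        model_name="example-model-v1",
        reporting_period="2023-Q2",
        training_data_sources=["licensed-corpus-a", "public-web-crawl"],
        evaluation_scores={"toxicity_rate": 0.004, "factuality": 0.87},
        red_team_findings=12,
        incident_count=3,
        mitigation_notes="Filter updated after prompt-injection findings.",
    )
    print(report)

The point, as the essay argues, is not any particular schema but that routine disclosure gives companies, regulators, and the public a shared vocabulary.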

The conclusion is warranted by the arguments offered in the essay:

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.

My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:

  1. The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self-regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
  2. The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record-keeping requirements.
  3. The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman-Fried FTX problem.
  4. The MBA-infused “ethical compass” of outfits operating with a McKinsey-type pivot point?

My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.

With innovations in smart software arriving quickly, possibly as quickly as new Web pages appeared in the early days of the Internet, firms that don’t act expediently and push toward autonomous artificial intelligence will be unable to catch up with those that move with alacrity.

Net net: No regulation, imposed or self-generated, will alter the rocket launch of new AI services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self-interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford today; those who can drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.

Stephen E Arnold, April 20, 2023
