Smart Software Wizard: Preparation for a Future As a Guru

September 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “I Hope I’m Wrong: the Co-Founder of DeepMind on How AI Threatens to Reshape Life As We Know It.” The article appears to be an interview with one of the founders of the Google DeepMind outfit. There are numerous somewhat astounding quotes in the write up. To enjoy the humble bragging, the “gee whiz, trouble ahead” deflection, and the absolutism of the Google way — read the edited interview yourself. (You will have to click through the increasingly strident appeals for cash from the Guardian newspaper. I find them amusing since the “real news” business talked itself into the pickle many “real news” outfits now find themselves in.) The “interview” is a book review. Just scroll to the end of the lengthy PR piece about “The Coming Wave.” If there were old fashioned bookstores, you might be able to view the wizard and buy a signed copy. But another high-tech outfit fixed up the bookstore business, so you will have to Google it. Heck, Google everything.


A serious looking AI expert ponders what to do with smart software. It looks to me as if the wizard is contemplating cashing in, becoming famous, and buying a super car. Will he volunteer for the condo association’s board of directors? Thanks, MidJourney. No Mother MJ hassling me for this highly original AI art.

Back to the write up which seems to presage doom from smart software.

Here’s a statement I found interesting:

“I think that what we haven’t really come to grips with is the impact of … family. Because no matter how rich or poor you are, or which ethnic background you come from, or what your gender is, a kind and supportive family is a huge turbo charge,” he says. “And I think we’re at a moment with the development of AI where we have ways to provide support, encouragement, affirmation, coaching and advice. We’ve basically taken emotional intelligence and distilled it. And I think that is going to unlock the creativity of millions and millions of people for whom that wasn’t available.”

Very smart person, that developer of smart software. The leap from the family to unlocking creativity is interesting. If you think right wing political movements are zipping along on encrypted messaging apps now, just wait until AI adds a turbo boost. That’s something to anticipate. Also, I like the idea of Google DeepMind taking “intelligence” and distilling it like a chef in a three star Michelin restaurant reducing a sauce with goose fat in it.

I also noted this statement:

I think this idea that we need to dismantle the state, we need to have maximum freedom – that’s really dangerous. On the other hand, I’m obviously very aware of the danger of centralized authoritarianism and, you know, even in its minuscule forms like nimbyism*. That’s why, in the book, we talk about a narrow corridor between the danger of dystopian authoritarianism and this catastrophe caused by openness. That is the big governance challenge of the next century: how to strike that balance. [Editor’s Note: Nimbyism means you don’t want a prison built adjacent to million dollar homes.]

Imagine, a Googler and DeepMinder looking down the Information Highway.

How are social constructs coping with the information revolution? If AI is an accelerant, what will go up in flames? One answer is Dr. Jeff Dean’s career. He was a casualty of a lateral arabesque because the DeepMind folks wanted to go faster. “Old” Dr. Dean was a turtle wandering down the Information Superhighway. He’s lucky he was swatted to the side of the road.

What are these folks doing? In my opinion, these statements do little to reduce my anxiety about the types of thinkers who knowingly create software systems purpose built to extend the control of a commercial enterprise. Without regulation, the dark flowers of some wizards are blooming in the Google walled garden.

One of these wizards is hoping that he is wrong about the negative impacts of smart software. Nice try, but it won’t work for me. Creating publicity and excuses is advertising. But that’s the business of Google, isn’t it? The core competence of some wizards is not moral or ethical action in my opinion. PR is good, however.

Stephen E Arnold, September 4, 2023

Surprised? Microsoft Drags Feet on Azure Security Flaw

September 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft has addressed a serious security flaw in Azure, but only after being called out by the cybersecurity firm that found the issue. It only took several months. Oh, and according to that firm, the “fix” only applies to new applications despite Microsoft’s assurances to the contrary. “Microsoft Fixes Flaw After Being Called Irresponsible by Tenable CEO,” Bleeping Computer reports. Writer Sergiu Gatlan describes the problem Tenable found within the Power Platform Custom Connectors feature:

“Although customer interaction with custom connectors usually happens via authenticated APIs, the API endpoints facilitated requests to the Azure Function without enforcing authentication. This created an opportunity for attackers to exploit unsecured Azure Function hosts and intercept OAuth client IDs and secrets. ‘It should be noted that this is not exclusively an issue of information disclosure, as being able to access and interact with the unsecured Function hosts, and trigger behavior defined by custom connector code, could have further impact,’ says cybersecurity firm Tenable which discovered the flaw and reported it on March 30th. ‘However, because of the nature of the service, the impact would vary for each individual connector, and would be difficult to quantify without exhaustive testing.’ ‘To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank. They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft,’ Tenable CEO Amit Yoran added.”
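The class of bug Tenable describes — an API endpoint that forwards requests to a backend function without enforcing authentication — can be sketched in a few lines. This is a toy illustration only; the endpoint and secret names are hypothetical, not Microsoft’s actual code:

```python
# Toy sketch of the flaw class (names are hypothetical): a gateway that
# forwards requests to a backend function WITHOUT checking auth first.
SECRETS = {"oauth_client_id": "bank-app", "oauth_secret": "s3cr3t"}

def backend_function(request):
    # Written as if only trusted, authenticated callers can reach it.
    return SECRETS

def vulnerable_gateway(request):
    # BUG: forwards straight to the backend; no authentication check.
    return backend_function(request)

def fixed_gateway(request):
    # FIX: enforce authentication at the endpoint fronting the function.
    if request.get("Authorization") != "Bearer valid-token":
        return {"error": "401 Unauthorized"}
    return backend_function(request)

# An anonymous attacker hits both endpoints:
anonymous = {}
assert vulnerable_gateway(anonymous) == SECRETS  # secrets leak
assert fixed_gateway(anonymous) == {"error": "401 Unauthorized"}
```

The point is that the backend trusted its caller; the moment the fronting endpoint skips the check, every secret the function can reach is exposed.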

Yes, that would seem to be worth a sense of urgency. But even after the eventual fix, this bank and any other organizations already affected were still vulnerable, according to Yoran. As far as he can tell, they weren’t even notified of the problem so they could mitigate their risk. If accurate, can Microsoft be trusted to keep its users secure going forward? We may have to wait for another crop of interns to arrive in Redmond to handle the work “real” engineers do not want to do.

Cynthia Murrell, September 5, 2023

Meta Play Tactic or Pop Up a Level. Heh Heh Heh

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Years ago I worked at a blue chip consulting firm. One of the people I interacted with had a rhetorical ploy when cornered in a discussion. The wizard would say, “Let’s pop up a level.” He would then shift the issue which was, for example, a specific problem, into a higher level concept and bring his solution into the bigger context.


The clever manager pops up a level to observe the lower level tasks from a broader view. Thanks, Mother MJ. Not what I specified, but the gradient descent is alive and well.

Let’s imagine that the topic is a plain donut or a chocolate covered donut with sprinkles. There are six people in the meeting. The discussion is contentious because that’s what blue chip consulting Type As do: Contention, sometimes nice, sometimes not. The “pop up a level” guy says, “Let’s pop up a level. The plain donut has less sugar. We are concerned about everyone’s health, right? The plain donut does not have so much evil, diabetes linked sugar. It makes sense to just think of health and obviously the decreased risk for increasing the premiums for health insurance.” Unless one is paying attention and not eating the chocolate chip cookies provided for the meeting attendees, the pop-up-a-level approach might work.

For a current example of pop-up-a-level thinking, navigate to “Designing Deep Networks to Process Other Deep Networks.” Nvidia is in hog heaven with the smart software boom. The company realizes that there are lots of people getting in the game. The number of smart software systems and methods, products and services, grifts and gambles, and heaven knows what else is increasing. Nvidia wants to remain the Big Dog even though some outfits want to design their own chips or be like Apple and maybe find a way to do the Arm thing. Enter the pop-up-a-level play.

The write up says:

The solution is to use convolutional neural networks. They are designed in a way that is largely “blind” to the shifting of an image and, as a result, can generalize to new shifts that were not observed during training…. Our main goal is to identify simple yet effective equivariant layers for the weight-space symmetries defined above. Unfortunately, characterizing spaces of general equivariant functions can be challenging. As with some previous studies (such as Deep Models of Interactions Across Sets), we aim to characterize the space of all linear equivariant layers.

Translation: Our system and method can make use of any other accessible smart software plumbing. Stick with Nvidia.
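The property the quoted passage leans on — a convolution is “blind” to shifts of its input, so shifting the input just shifts the output — can be checked with a few lines of NumPy. A minimal one-dimensional sketch (not Nvidia’s code, just the textbook idea):

```python
import numpy as np

def conv1d(x, k):
    # 'valid' cross-correlation: slide kernel k across signal x
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 0.0])
k = np.array([1.0, -1.0])

y = conv1d(x, k)
x_shift = np.roll(x, 1)          # shift the input right by one sample
y_shift = conv1d(x_shift, k)

# Shift equivariance: the output shifts along with the input
# (up to border effects), so the layer "generalizes" to new shifts.
assert np.allclose(y_shift[1:], y[:-1])
```

The equivariant-layer research generalizes this: instead of invariance to image shifts, the layers are built to respect the symmetries of weight spaces themselves.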

I think the pop-up-a-level approach is a useful one. Are the competitors savvy enough to counter the argument?

Stephen E Arnold, September 4, 2023

Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkman notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:

“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”

Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.

Cynthia Murrell, September 4, 2023

Google: Another Modest Proposal to Solve an Existential Crisis. No Big Deal, Right?

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated with corporate “do goodism.” Many people find themselves in an existential crisis anchored in zeros and ones. Is the essay submitted as original work the product of an industrious 15 year old? Or is the essay the 10 second output of a smart software system like ChatGPT or You.com? Is that brilliant illustration the labor of a dedicated 22 year old laboring in a cockroach infested garage in Corona, Queens? Or was the art used in this essay output in about 60 seconds by my trusted graphic companion Mother MidJourney?


“I see the watermark. This is a fake!” exclaims the precocious lad. This clever middle school student has identified the super secret hidden clue that this priceless image is indeed a fabulous fake. How could a young person detect such a sophisticated and subtle watermark? The attitude seems to be, “Let’s overestimate our capabilities and underestimate those of young people who are skilled navigators of the digital world.”

Queens? And what’s this “modest proposal” angle? Jonathan Swift beat this horse until it died in the early 18th century. I think the reference makes a bit of sense. Mr. Swift proposed simple solutions to big problems. “DeepMind Develops Watermark to Identify AI Images” explains:

Google’s DeepMind is trialling [sic] a digital watermark that would allow computers to spot images made by artificial intelligence (AI), as a means to fight disinformation. The tool, named SynthID, will embed changes to individual pixels in images, creating a watermark that can be identified by computers but remains invisible to the human eye. Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation.”
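The general idea of a machine-readable, eye-invisible watermark is easy to illustrate. The sketch below hides bits in the least significant bit of each pixel — a classic toy technique, emphatically not SynthID’s actual (unpublished) method:

```python
import numpy as np

# Toy invisible watermark via least-significant-bit embedding.
# NOT SynthID's method; just the generic pixel-watermark concept.
def embed(image, bits):
    wm = image.copy()
    flat = wm.reshape(-1)                     # view into the copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return wm

def extract(image, n):
    return image.reshape(-1)[:n] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)

marked = embed(img, payload)
assert np.array_equal(extract(marked, 8), payload)
# "Invisible": no pixel changes by more than 1 out of 255.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```

The sketch also shows why such schemes are not “foolproof”: resaving, cropping, or recompressing the image can scramble the low-order bits and destroy the payload.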

Righto, it’s good enough. Plus, the affable crew at Alphabet Google YouTube are in an ideal position to monitor just about any tiny digital thing in the interwebs. Such a prized position as de facto ruler of the digital world makes it easy to flag and remove offending digital content with the itty bitty teenie weeny manipulated pixel thingy.

Let’s assume that everyone, including the young fake spotter in the Mother MJ image accompanying this essay, gets to become the de facto certifier of digital content. What are the downsides?

Gee, I give up. I cannot think of one thing wrong with Google’s becoming the chokepoint for what’s in bounds and what’s out of bounds. Everyone will be happy. Happy is good in our stressed out world.

And think of the upsides? A bug might derail some creative work? A system crash might nuke important records about a guilty party because pixels don’t lie? Well, maybe just a little bit. The Google intern given the thankless task of optimizing image analysis might stick in an unwanted instruction. So what? The issue will be resolved in a court, and these legal proceedings are super efficient and super reliable.

I find it interesting that the article does not see any problem with the Googley approach. Like the Oxford research which depended upon Facebook data, the truth is the truth. No problem. Gatekeepers and certification authority are exciting business concepts.

Stephen E Arnold, September 1, 2023

Regulating Smart Software: Let Us Form a Committee and Get Industry Advisors to Help

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The Boston Globe published what I thought was an amusing “real” news story about legislators and smart software. I know. I know. I am entering oxymoron land. The article is “The US Regulates Cars, Radio, and TV. When Will It Regulate AI?” A number of passages received True Blue check marks.


A person living off the grid works to make his mobile phone deliver generative content to solve the problem of … dinner. Thanks, MidJourney. You did a Stone Age person but you would not generate a street person. How helpful!

Let me share two passages and then offer a handful of observations.

How about this statement attributed to Microsoft’s Brad Smith. He is the professional who was certain Russia organized 1,000 programmers to figure out the SolarWinds security loopholes. Yes, that Brad Smith. The story quotes him as saying:

“We should move quickly,” Brad Smith, the president of Microsoft, which launched an AI-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Chuck Schumer, the Senate majority leader, has said. “Let’s get ahead of this,” said Sen. Mike Rounds, R-S.D.

Microsoft moved fast. I think the reason was to make Google look stupid. Both of these big outfits know that online services aggregate and become monopolistic. Microsoft wants to be the AI winner. Microsoft is not spending extra time helping elected officials understand smart software or the stakes on the digital table. No way.

The second passage is:

Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy.

Please, read the original “real” news story for Captain Obvious statements. Here are a few observations:

  1. Smart software is moving along at a reasonable clip. Big bucks are available to AI outfits in Germany and elsewhere. Something like 28 percent of US companies are fiddling with AI. Yep, even those raising chickens have AI religion.
  2. The process of regulation is slow. We have a turtle and a hare situation. Nope, the turtle loses unless an exogenous power kills the speedy bunny.
  3. If laws were passed, how would one get fast action to apply them? How is the FTC doing? What about the snappy pace of the CDC in preparing for the next pandemic?

Net net: Yes, let’s understand AI.

Stephen E Arnold, September 1, 2023.


YouTube Content: Are There Dark Rabbit Holes in Which Evil Lurks? Come On Now!

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Google has become a cultural touchstone. The most recent evidence is a bit of moral outrage in Popular Science. Now the venerable magazine is PopSci.com, and the Google has irritated the technology explaining staff. Navigate to “YouTube’s Extremist Rabbit Holes Are Deep But Narrow.”


“Google, your algorithm is creating rabbit holes. Yes, that is a technical term,” says the PopSci technology expert. Thanks for a C+ image, MidJourney.

The write up asserts:

… exposure to extremist and antagonistic content was largely focused on a much smaller subset of already predisposed users. Still, the team argues the platform “continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.” Not only that, but engagement with this content still results in advertising profits.

I think the link with popular science is the “algorithm.” But the write up seems to be more a see-Google-is-bad essay. Science? No. Popular? Maybe?

The essay concludes with this statement:

While continued work on YouTube’s recommendation system is vital and admirable, the study’s researchers echoed that, “even low levels of algorithmic amplification can have damaging consequences when extrapolated over YouTube’s vast user base and across time.” Approximately 247 million Americans regularly use the platform, according to recent reports. YouTube representatives did not respond to PopSci at the time of writing.

I find the use of the word “admirable” interesting. Also, I like the assertion that algorithms can do damage. I recall seeing a report that explained social media is good and another study pitching the idea that bad digital content does not have a big impact. Sure, I believe these studies, just not too much.

Google has a number of buns in the oven. The firm’s approach to YouTube appears to be “emulate Elon.” Content moderation will be something with a lower priority than keeping tabs on Googlers who don’t come to the office or do much Google work. My suggestion for Popular Science is to do a bit more science, and a little less quasi-MBA type writing.

Stephen E Arnold, September 1, 2023
