More Google PR: For an Outfit with an Interesting Past, Chattiness Is Now a Core Competency

May 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How many speeches, public talks, and interviews did Sergey Brin, Larry Page, and Eric Schmidt do? To my recollection, not too many. And what about now? Larry Page is tough to find. Mr. Brin is sort of invisible. Eric Schmidt seems to have backed off his claim that Qwant kept him up at night. But Sundar Pichai, one half of the Sundar and Prabhakar Comedy Show, is quite visible: AI-everywhere keynote speeches, essays about smart software, and now an original “he wrote it himself” essay in the weird salmon-tinted newspaper The Financial Times. Yeah, pinkish.


Smart software provided me with an illustration of a fast talker pitching the future benefits of a new product. Yep, future probabilities. Rock solid. Thank you, MidJourney.

What’s with the spotlight on the current Google big wheel? Gentle reader, the visibility is one way Google is trying to advance its agenda. Before I offer my opinion about the Alphabet Google YouTube agenda, I want to highlight three statements in “Google CEO: Building AI Responsibly Is the Only Race That Really Matters.”

Statement from the Google essay #1

At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.

The theme is that Google has been doing smart software for a long time. Let’s not forget that the GOOG released the Transformer model as open source and then sat on its Googley paws while “stuff happened” starting in 2018. Was that responsible? If so, what does Google mean when it uses the word “responsible” as it struggles to cope with the meme “Google is late to the game”? For example, Microsoft pulled off a global PR coup with its Davos smart software announcements. Google responded with the Paris demonstration of Bard, a hoot for many in the information retrieval killing field. That performance of the Sundar and Prabhakar Comedy Show flopped. Meanwhile, Microsoft pushed its “flavor” of AI into its enterprise software and cloud services. My experience is that for every big PR action, there is an equal or greater PR reaction. Google is trying to catch faster race cars with words, not a better, faster, and cheaper machine. The notion that Google “gets it right” means to me one thing: Maintaining quasi-monopolistic control of its market and generating the ad revenue. After 25 years, Google is walking the same old Chihuahua in a dog park filled with younger, more agile canines. After 25 years of me-too moves and flops like solving death, revenue is the ONLY thing that matters to stakeholders. The Sundar and Prabhakar routine is wearing thin.

Statement from the Google essay #2

We have many examples of putting those principles into practice…

The “principles” apply to Google AI implementation. But the word principles is an interesting one. Google is paying fines for ignoring laws and its principles. Google is under the watchful eye of regulators in the European Union due to Google’s principles. China wanted Google to change, and Google beavered away on a China-acceptable search system until the cat was let out of the bag. Google is into equality, a nice principle, which was implemented by firing AI researchers who complained about what Google AI was enabling. Google is not the outfit I would consider the optimal source of enlightenment about principles. High tech in general and Google in particular are viewed with increasing concern by regulators in US states and assorted nation states. Why? The Googley notion of principles is not what others understand the word to denote. In fact, some might say that Google operates in an unprincipled manner. Is that why companies like Foundem and regulatory officials point out behaviors which some might find predatory, mendacious, or illegal? Principles, yes, principles.

Statement from the Google essay #3

AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more.

Many years ago, I was in a meeting in DC, and the Donald Rumsfeld quote about information was making the rounds. Good appointees loved to cite this Donald. Here’s the quote from 2002:

There are known knowns; there are things we know we know.  We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.

I would humbly suggest that smart software is chock full of known unknowns. But humans are not very good at predicting the future. When it comes to acting “responsibly” in the face of unknown unknowns, I dismiss those who dare to suggest that humans can predict the future in order to act in a responsible manner. Humans do not act responsibly with either predictability or reliability. My evidence is part of your mental furniture: Racism, discrimination, continuous war, criminality, prevarication, exaggeration, failure to regulate damaging technologies, ineffectual action against industrial polluters, etc. etc. etc.

I want to point out that the Google essay penned by one half of the Sundar and Prabhakar Comedy Show team could be funny if it were not a synopsis of the digital tragedy of the commons in which we live.

Stephen E Arnold, May 23, 2023

Please, World, Please, Regulate AI. Oh, Come Now, You Silly Goose

May 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The ageing heart of capitalistic ethicality is beating in what some cardiologists might call arrhythmia. Beating fast and slow means that the coordinating mechanisms are out of whack. What’s the fix? Slam in an electronic gizmo for the humanoid. But what about a Silicon Valley with rhythm problems: Terminating employees, legal woes, annoying elected officials, and teen suicides? The outfits poised to make a Nile River of cash from smart software are doing the “begging” thing.


The Gen X whiz kid asks the smart software robot: “Will the losers fall for the call to regulate artificial intelligence?” The smart software robot responds, “Based on a vector and matrix analysis, there is a 75 to 90 percent probability that one or more nation states will pass laws to regulate us.” The Gen X whiz kid responds, “Great, I hate doing the begging and pleading thing.” The illustration was created by my old pal, MidJourney digital emulators.

“OpenAI Leaders Propose International Regulatory Body for AI” is a good summation of the “please, regulate AI even though most people don’t understand it and its downstream consequences are unknown” pitch. The write up states:

…AI isn’t going to manage itself…

We have some first-hand experience with Silicon Valley wizards who [a] allow social media technology to destroy the fabric of civil order, [b] control information frames so that hidden hands can cause irrelevant ads to bedevil people looking for a Thai restaurant, [c] ignore the laws of different nation states because the fines are little more than the cost of sandwiches at an off-site meeting, and [d] engage in sporty behavior under the cover of attendance at industry conferences (why did a certain Google Glass marketing executive try to kill herself, and what about the yacht incident with a controlled substance and a subsequent death?).

What fascinated me was the idea that an international body should regulate smart software. The international bodies did a bang up job with the Covid speed bump. The United Nations is definitely on top of the situation in central Africa. And the International Criminal Court? Oh, right, the US is not a party to that organization.

What’s going on with these calls for regulation? In my opinion, there are three vectors for this line of begging, pleading, and whining.

  1. The begging can be cited as evidence that OpenAI and its fellow travelers tried to do the right thing. That’s an important psychological ploy so the company can go forward and create a Terminator version of Clippy with its partner Microsoft.
  2. The disingenuous “aw, shucks” approach provides a lousy make-up artist with an opportunity to put lipstick on a pig. The shoats and hoggets look a little better than some of the smart software champions. Dim light and a few drinks can transform a boarlet into something spectacular in the eyes of woozy venture capitalists.
  3. Those pleading for regulation want to make sure their company has a fighting chance to dominate the burgeoning market for smart software methods. After all, the ageing Googzilla is limping forward with billions of users who will chow down on the deprecated food available in the Google cafeterias.

At least Marie Antoinette avoided the begging until she was beheaded. Apocryphal or not, she held to the “Let them eat mille-feuille” line. But the blade fell anyway.

PS. There allegedly will be ChatGPT 5.0. Isn’t that prudent?

Stephen E Arnold, May 23, 2023


Neeva: Another Death from a Search Crash on the Information Highway

May 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

What will forensic search experts find when they examine the remains of Neeva? The “gee, we failed” essay “Next Steps for Neeva” presents one side of what might be an interesting investigation for a bushy-tailed and wide-eyed Gen Z search influencer. I noted some statements which may have been plucked from speeches at the original Search Engine Conferences ginned up by an outfit in the UK or from academic post-mortems at the old International Online Meeting once held in the companionable Olympia London.

I noted these statements from the cited document:

Statement 1: The users of a Web search system

We started Neeva with the mission to take search back to its users.

The reality is that 99 percent of people using a Web search engine are happy when sort-of-accurate information is provided free. Yep, no one wants to pay for search. That’s the reason that when a commercial online service like LexisNexis loses one big client, it is expensive, time-consuming, and difficult to replace the revenue. One former LexisNexis big wheel told me when we met in his limousine in the parking lot of the Cherry Hill Mall: “If one of the top 100 law firms goes belly up, we need a minimum of 200 new law firms to sign up for our service and pay for it.”


“Mommy, I failed Search,” says Timmy Neeva. Mrs. Neeva says, “What caused your delusional state, Timmy?” The artwork is a result of the smart software MidJourney.

Users don’t care about for-fee search when those users wouldn’t know whether a hit in a results list was right, mostly right, mostly wrong, or stupidly crazy. Free is the fuel that pulls users, and without advertising, there’s no chance a free service will be able to generate enough cash to index, update the index, and develop new features. At the same time, the plumbing is leaking. Plumbing repairs are expensive: New machines, new ways to reduce power consumption, and oodles of new storage devices.

Users want free. Users don’t want to compare the results from a for-fee service and a free service. Users want free. After 25 years, the Google is the champion of free search. Like the wizards behind the old Xoogler search system Search2, Neeva’s wizards never figured out that most users don’t care about Fancy Dan yip yap about search.

Statement 2: An answer engine.

We rallied the Neeva team around the vision to create an answer engine.

Shades of DR-LINK: Users want answers. In 1981, a former Predicasts executive named Paul Owen told me, “Dialog users want answers.” That sounds logical, and to many expert informationists it is the Gospel according to Online. The reality is that users want crunchy, bite-sized chunks of information which appear to answer the question, or almost-right answers that are “good enough” or “close enough for horseshoes.”

Users cannot differentiate between correct and incorrect information. Heck, some developers of search engines don’t know the difference between weaponized information and content produced by a middle school teacher about the school’s graduation ceremony. Why? Weaponized information is abundant; non-weaponized information may not pass the user’s sniff test. And the content about the middle school graduation ceremony may have a typo about the start time, or the principal of the school may have changed his mind due to an active shooter situation. Something output from a computer is believed to be credible, accurate, and “right.” An answer engine is what a free Web search engine spits out. The TikTok search spits out answers, and no one wonders if the results list is shaped by Chinese interests.

Search and retrieval has been defined by Google. The company has a 90-plus percent share of the Web search traffic in North America and Western Europe. (In Denmark, the company has 99 percent of Danish users’ search traffic. People in Denmark are happier, and it is not because Google search delivers better or more accurate results.) Google is free, and it answers questions.

The baloney about how it takes one click to change search engines sounds great. The reality is, as Neeva found out, that no one wants to click away from what is perceived to work for them. Neeva’s yip yap about smart software proves that the jazz about artificial intelligence is unlikely to change how free Web search works in Google’s backyard. Samsung did not embrace Bing because users would rebel.

Answer engine. Baloney. Users want something free that will make life easier; for example, a high school student looking for a quick way to crank out a 250-word essay about global warming or how to make a taco. ChatGPT is not answering questions; the application is delivering something that is highly desirable to a lazy student. By the way, at least the lazy student had the get-up-and-go to use a system to spit out a bunch of recycled content that is good enough. But an answer engine? No, an online convenience store is closer to the truth.

Statement 3:

We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.

My interpretation of this statement is that a couple of Neeva professionals will become venture centric. Others will become consultants. A few will join the handful of big companies which are feverishly trying to use “smart software” to generate more revenue. Will there be some who end up working at Philz Coffee? Yeah, some. Perhaps another company will buy the “code,” but valuing something that failed is likely to prove tricky. Who remembers who bought Entopia? No one, right?

Net net: The Gen Z forensic search failure exercise will produce some spectacular Silicon Valley news reporting. Neeva is explaining its failure, but that failure was presaged when Fast Search & Transfer pivoted from Web search to the enterprise, failed, and was acquired by Microsoft. Where is Fast Search now as the smart Bing is soon to be everywhere? The reality is that Google has had 25 years to cement its search monopoly. Neeva did not read the email. So Neeva sucked up investment bucks with a song and dance about zapping the Big Bad Google with a death ray. Yep, another example of high school science club mentality touched by spreadsheet fever.

Well, the fever broke.

Stephen E Arnold, May 22, 2023

Google DeepMind Risk Paper: 60 Pages with a Few Googley Hooks

May 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved in writing, just a dumb humanoid.

I read the long version of “Ethical and Social Risks of Harm from Language Models.” The paper is mostly statements plus footnotes to individuals who created journal-type articles which purportedly prove each point. With about 25 percent of peer-reviewed research including shaped, faked, or weaponized data, I am not convinced by footnotes. Obviously the DeepMinders believe that footnotes make a case for the Google way. I am not convinced, because the Google has to find a way to control the future of information. Why? Advertising money and hoped-for Mississippis of cash.

The research paper dates from 2021 and is part of Google’s case for being ahead of the AI responsibility game. The “old” paper reinforces the myth that Google is ahead of everyone else in the AI game. The explanation for Sam AI-man’s and Microsoft’s marketing coup is that Google had to go slow because Google knew that there were ethical and social risks of harm from the firm’s technology. Google cares about humanity! The old days of “move fast and break things” are very 1998. Today Google is responsible. The wild and crazy dorm days are over. Today’s Google is concerned, careful, judicious, and really worried about its revenues. I think the company worries about legal actions, its management controversies, and its digital duel with the Softies of Redmond.


A young researcher desperately seeking footnotes to support a specious argument. With enough footnotes, one can move the world it seems. Art generated by the smart software MidJourney.

I want to highlight four facets of the 60-page risks paper which are unlikely to get much, if any, attention from today’s “real” journalists.

Googley hook 1: Google wants to frame the discussion. Google is well positioned to “guide mitigation work.” The examples in the paper are selected for “guiding action to resolve any issues that can be identified in advance.” My comment: How magnanimous of Google. Framing stakes out the Googley territory. Why? Google wants to be Googzilla and reap revenue from its users, licensees, models, synthetic data, applications, and advertisers. You can find the relevant text in the paper on page 6 in the paragraph beginning “Responsible innovation.”

Googley hook 2: Google’s risks paper references fuzzy concepts like “acceptability” and “fair.” Like love, truth, and ethics, the notion of “acceptability” is difficult to define. Some might suggest that it is impossible to define. But Google is up to the task, particularly for application spaces unknown at this time. What happens when you apply “acceptability” to “poor quality information”? One just accepts the judgment of the outfit doing the framing. That’s Google. Game. Set. Match. You can find the discussion of “acceptability” on page 9.

Googley hook 3: Google is not going to make the mistake of Microsoft and its racist bot Tay. No way, José. What’s interesting is that the only company mentioned in the text of the 60-page paper is Microsoft. Furthermore, the toxic aspects of large language models are hard for technologies to detect (page 18). Plus, large language models can infer a person’s private data. So “providing true information is not always beneficial” (page 21). What’s the fix? Use smaller sets of training data… maybe (page 22). But one can fall back on trust — for instance, trust in Google the good — to deal with these challenges. In fact, trust Google to choose training data to deal with some of the downsides of large language models (page 24).

Googley hook 4: Mitigating risk in smart software built on large language models is expensive. Money, smart people who are in short supply, and computing resources all cost a great deal. Therefore, one need not focus on the origin point (large language model training and configuration). Direct attention at those downstream. Those users can deal with the identified 21 problems. The Google method puts Google out of the primary line of fire. There are more targets for the aggrieved to seek out and shoot at (page 37).

When I step back from the article, which is two years old, it is obvious Google was aware of some potential issues with its approach. Dr. Timnit Gebru was sacrificed on a pyre of spite. (She does warrant a couple of references and a footnote or two, but she’s now a Xoogler.) One side effect was that Dr. Jeff Dean, who was not amused by the stochastic parrot paper, has been kicked upstairs, and the UK “leader” is now herding the little wizards of Google AI.

The conclusion of the paper echoes the Google-knows-best argument. Google wants a methodological toolkit because that will keep other people busy. Google wants others to figure out “fair,” an approach similar to that of Sam Altman (OpenAI), who begs for regulation of a sector about which much is unknown.

The answer, according to the risk analysis, is “responsible innovation.” I would suggest that this paper, the television interviews, and the PR efforts to get the Google story in as many places as possible are designed to make the sluggish Google a player in the AI game.

Who will be fooled? Will Google catch up in this Silicon Valley venture-invigorating hill climb? For me, the paper with the footnotes is just part of Google’s PR and marketing effort. Your mileage may vary. May relevance be with you, gentle reader.

Stephen E Arnold, May 22, 2023

AI Is Alive. Plus It Loves Me. No One Else Does. Sigh

May 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Those young whiz kids from Stanford University have come up with an idea sure to annoy some in the AI Funland of Frenzy. Imagine. Some bright young sprouts suggest that calling smart software alive, or “emergent” in the lingo of the day, says more about the researcher than about the smart software.


“I know my system is alive. She loves me. She understands me.” — The words of a user who believes his system is alive and loves him. Imagined by good old heavily filtered MidJourney.

Don’t believe me? Navigate to “Are Emergent Abilities of Large Language Models a Mirage?” The write up suggests, based on the authors’ research obviously:

we find strong supporting evidence that emergent abilities may not be a fundamental property of scaling AI models.

One Googler wobbled very close to hiring a lawyer for his flavor of smart software. Others believe that ChatGPT is really talking with them. A New York Times technology expert learned that his smart software wanted the humanoid to ditch his significant other.

What’s happening is that the humanoid projects human characteristics onto the software. One can watch this behavior in its many hues by observing how owners of French bulldogs treat their animals. The canine receives treats, clothes, and attention. The owner talks to the dog. The owner believes the dog is on the same wavelength as the owner. In fact, one Frenchie owner professed love for the pet. (I know this from direct observation.)

If you want to learn more about personification and weird identification with software, read the 16 page paper. Alternatively, you can accept my summary. Excuse me. I have to say good morning to my ChatGPT session. I know it misses me.

Stephen E Arnold, May 22, 2023

ChatBots: For the Faithful Factually?

May 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I spotted Twitch’s AI-fueled ask_Jesus. You can take a look at this link. The idea is that smart software responds in a way a cherished figure would. If you watch the questions posed by registered Twitchers, you can wait a moment, and the AI Jesus will answer the question. Rather than paraphrase or quote the smart software, I suggest you navigate to this Bezos bulldozer property and check out the “service.”

I mention the Amazon offering because I noted another smart religion robot write up called “India’s Religious AI Chatbots Are Speaking in the Voice of God and Condoning Violence.” The article touches upon several themes which I include in my 2023 lecture series about the shadow Web and misinformation from bad actors and wonky smart software.

This Rest of World article reports:

In January 2023, when ChatGPT was setting new growth records, Bengaluru-based software engineer Sukuru Sai Vineet launched GitaGPT. The chatbot, powered by GPT-3 technology, provides answers based on the Bhagavad Gita, a 700-verse Hindu scripture. GitaGPT mimics the Hindu god Krishna’s tone — the search box reads, “What troubles you, my child?”

The trope is for the “user” to input a question and the smart software outputs a response. But there is not just Sukuru’s version. There are allegedly five GitaGPTs available “with more on the way.”
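
The article does not show how GitaGPT is wired together, but the behavior it describes (GPT-3 wrapped in a Krishna persona that answers from the Bhagavad Gita) can be approximated with little more than a system prompt. The sketch below is a guess at that pattern, not Sukuru Sai Vineet’s code; the model name, the prompt wording, and the use of the OpenAI Python client are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona prompt; the real GitaGPT's prompt is not public.
SYSTEM_PROMPT = (
    "You answer in the voice of Krishna, drawing only on the Bhagavad Gita. "
    "Address the user as 'my child' and name the chapter and verse you rely on."
)

def ask_gita(question: str) -> str:
    """Send a user question through the persona prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the article says GitaGPT uses GPT-3
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(ask_gita("What should I do when I am afraid of failing my exams?"))
```

A production version would presumably add retrieval over the 700 verses; the persona itself is just prompt text.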

The article includes a factoid in a quote allegedly from a human AI researcher; to wit:

Religion is the single largest business in India.

I did not know this. I thought it was outsourced work product. Live and learn.

Is there some risk with religious chatbots? The write up states:

Religious chatbots have the potential to be helpful, by demystifying books like the Bhagavad Gita and making religious texts more accessible, Bindra said. But they could also be used by a few to further their own political or social interests, he noted. And, as with all AI, these chatbots already display certain political biases. [The Bindra is Jaspreet Bindra, AI researcher and author of The Tech Whisperer]

I don’t want to speculate what the impact of a religious chatbot might be if the outputs were tweaked for political or monetary purposes.

I will leave that to you.

Stephen E Arnold, May 19, 2023

Thinking about AI: Is It That Hard?

May 17, 2023

I read “Why I’m Having Trouble Covering AI: If You Believe That the Most Serious Risks from AI Are Real, Should You Write about Anything Else?” The version I saw was a screenshot, presumably to make me go to Platformer in order to interact with it. I use smart software to convert screenshots into text, so whatever barrier the screenshot was supposed to create existed only in the mind of its creator.

Here’s a statement I underlined:

The reason I’m having trouble covering AI lately is because there is such a high variance in the way that the people who have considered the question most deeply think about risk.

My recollection is that Daniel Kahneman allegedly cooked up the idea of “prospect theory.” As I understand the idea, humans are not very good at thinking about risk. In fact, some people take risks because they think a problem can be averted. Others avoid risk because omission seems okay; for example, not reporting a financial problem. Why not just leave it out and cook up a footnote? Omissions are often okay with some government authorities.

I view the AI landscape from a different angle.

First, smart software has been chugging along for many years. May I suggest you fire up a copy of Microsoft Word, use it with its default settings, and watch how words are identified, phrases underlined, and letters automatically capitalized? How about using Amazon to buy lotion? Click on the buy now button and navigate to the order page. It’s magic. Amazon has used software to perform tasks which once required a room with clerks. There are other examples. My point is that the current baloney roll is swelling from its own gaseous emissions.

Second, the magic of ChatGPT outputting summaries was available 30 years ago from Island Software. Stick in the text of an article, and the desktop system spat out an abstract. Was it good? If one were a high school student, it was. If one were building a commercial database product fraught with jargon, technical terms, and abstruse content, it was not so good. Flash forward to now. Bing, You.com, and presumably the new and improved Bard are better. Is this surprising? Nope. Thirty years of effort have gone into this task of making a summary. Am I to believe that the world will end because smart software is causing a singularity? I am not reluctant to think quantum supremacy type thoughts. I just don’t get too overwrought.
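
For readers who never saw one of those early 1990s abstracting tools, the core trick was, and still is, unglamorous. Here is a minimal sketch of a frequency-based extractive summarizer; it is my illustration of the general approach, not Island Software’s actual method.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "was", "are", "be", "this"}

def naive_abstract(text: str, max_sentences: int = 3) -> str:
    """Keep the few sentences whose words occur most often in the article:
    plain extraction, the kind of thing desktop tools did decades ago."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)
    scored = []
    for idx, sentence in enumerate(sentences):
        terms = re.findall(r"[a-z']+", sentence.lower())
        score = sum(freq[t] for t in terms if t not in STOPWORDS)
        scored.append((score, idx, sentence))
    top = sorted(scored, reverse=True)[:max_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda item: item[1]))

article_text = "Paste the text of an article here."  # placeholder input
print(naive_abstract(article_text))
```

ChatGPT produces abstractive summaries rather than stitched-together sentences, but the job description has not changed much in three decades.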

Third, using smart software and methods which have been around for centuries — yep, centuries — is a result of easy-to-use tools being available at low cost or free. I find You.com helpful; I don’t pay for it. I tried Kagi and Teva; they were not so useful, and I won’t pay for them. Swisscows.com works reasonably well for me. Cash conserving and time saving are important. Smart software can deliver this easily and economically. When the math does not work, then I am okay with manual methods. Will the smart software take over the world and destroy me, as an Econ Talk guest suggested? Sure. Maybe? Soon. What’s soon mean?

Fourth, the interest in AI, in my opinion, is a result of several factors: [a] interesting demonstrations and applications at a time when innovation means buying or trying to buy a game company; [b] avoiding legal interactions due to behavioral or monopoly allegations; [c] a deteriorating economy due to the Covid and free money events; [d] frustration among users with software and systems focused on annoying, not delighting, their users; [e] the inability of certain large companies to make management decisions which rise above high school science club thinking, which is not appropriate for today’s business world; [f] data are available; [g] computing power is comparatively cheap; [h] software libraries, code snippets, off-the-shelf models, and related lubricants are findable and either free to use or cheap; [i] obvious inefficiencies exist, so a new tool is worth a try; and [j] the lure of a bright shiny thing which could make a few people lots of money adds a bit of zest to the stew.

Therefore, I am not confused, nor am I overly concerned with those who predict home runs or end-of-world outcomes.

What about big AI brains getting fired or quitting?

Three observations:

First, outfits like Facebook and Google type companies are pretty weird and crazy places. Individuals who want to take a measured approach, or who are not interested in having 20-somethings play with their mobiles while contributing to a discussion, should get out or get thrown out. Scared or addled or arrogant big company managers want the folks to speak the same language, to be on the same page even if the messages are written in invisible ink, encrypted, and circulated to the high school science club officers.

Second, like most technologies chock full of jargon, excitement, and the odor of crisp greenbacks, expectations are high. Reality is often able to deliver friction the cheerleaders, believers, and venture capitalists don’t want to acknowledge. That friction exists and will make its presence felt. How quickly? Maybe Bud Light quickly? Maybe Google ad practice awareness speed? Who knows? Friction just is, and, like gravity, it is difficult to ignore.

Third, the confusion about AI depends upon the lenses through which one observes what’s going on. What are these lenses? My team has identified five smart software lenses. Depending on what lens is in your pair of glasses and how strong the curvatures are, you will be affected by the societal lens, the technical lens, the individual lens (that’s the certain blindness each of us has), the political lens, and the financial lens. With lots to look at, the choice of lens is important. The inability to discern what is important depends on the context existing when the AI glasses are  perched on one’s nose. It is okay to be confused; unknowing adds the splash of Slap Ya Mama to my digital burrito.

Net net: Meta-reflections are a glimpse into the inner mind of a pundit, podcast star, and high-energy writer. The reality of AI is a replay of a video I saw when the Internet made online visible to many people, not just a few individuals. What’s happened to that revolution? Ads and criminal behavior. What about the mobile revolution? How has that worked out? From my point of view, it creates an audience for technology which could, might, may, will, or whatever other forward-looking word one wants to use. AI is going to glue together the lowest common denominator of greed with the deconstructive power of digital information. No Terminator is needed. I am used to being confused, and I am perfectly okay with the surrealistic world in which I live.

PS. We lectured two weeks ago to a distinguished group and mentioned smart software four times in two and one half hours. Why? It’s software. It has utility. It is nothing new. My prospect theory pegs artificial intelligence in the same category as online (think NASA Recon), browsing (think remote content to a local device), and portable phones (talking and doing other stuff without wires). Also, my Zepp watch stress reading is in the low 30s. No enlarged or cancerous prospect theory for me at this time.

Stephen E Arnold, May 17, 2023

Fake News Websites Proliferate. Thanks, AI!

May 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Technology has consequences. And in the case of advanced AI chatbots, it seems those who unleashed the tech on the world had their excuses ready. Gadgets360° shares the article, “ChatGPT-Like AI Chatbots Have Been Used to Create 49 News Websites: NewsGuard Report.” Though researchers discovered they were created with software like OpenAI’s ChatGPT and, possibly, Google Bard, none of the 49 “news” sites disclosed that origin story. Bloomberg reporter Davey Alba cites a report by NewsGuard that details how researchers hunted down these sites: They searched for phrases commonly found in AI-generated text using tools like CrowdTangle (a sibling of Facebook) and Meltwater. They also received help from the AI text classifier GPTZero. Alba writes:

“In several instances, NewsGuard documented how the chatbots generated falsehoods for published pieces. In April alone, a website called CelebritiesDeaths.com published an article titled, ‘Biden dead. Harris acting President, address 9 a.m.’ Another concocted facts about the life and works of an architect as part of a falsified obituary. And a site called TNewsNetwork published an unverified story about the deaths of thousands of soldiers in the Russia-Ukraine war, based on a YouTube video. The majority of the sites appear to be content farms — low-quality websites run by anonymous sources that churn-out posts to bring in advertising. The websites are based all over the world and are published in several languages, including English, Portuguese, Tagalog and Thai, NewsGuard said in its report. A handful of sites generated some revenue by advertising ‘guest posting’ — in which people can order up mentions of their business on the websites for a fee to help their search ranking. Others appeared to attempt to build an audience on social media, such as ScoopEarth.com, which publishes celebrity biographies and whose related Facebook page has a following of 124,000.”
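
NewsGuard’s phrase-hunting approach is simple enough to sketch: scan candidate pages for the boilerplate strings chatbots leave behind when an operator pastes output without reading it. The snippet below is illustrative only; the URLs are hypothetical, and the telltale phrases are examples of the kind of strings such a sweep might use, not NewsGuard’s actual query list.

```python
import requests

# Telltale strings chatbots leave behind when operators paste output unread.
# Illustrative guesses, not NewsGuard's actual query list.
TELLTALES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "my knowledge cutoff",
]

def flag_ai_residue(url: str) -> list[str]:
    """Fetch a page and return any telltale chatbot phrases it contains."""
    html = requests.get(url, timeout=10).text.lower()
    return [phrase for phrase in TELLTALES if phrase in html]

# Hypothetical candidate sites; a real sweep would start from seed lists and
# scale out with monitoring tools such as CrowdTangle or Meltwater.
candidate_sites = ["https://example.com/article-1", "https://example.com/article-2"]
for site in candidate_sites:
    hits = flag_ai_residue(site)
    if hits:
        print(f"{site} -> suspicious phrases: {hits}")
```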

Naturally, more than half the sites they found were running targeted ads. NewsGuard reasonably suggested AI companies should build in safeguards against their creations being used this way. Both OpenAI and Google point to existing review procedures and enforcement policies against misuse. Alba notes the situation is particularly tricky for Google, which profits from the ads that grace the fake news sites. After Bloomberg alerted it to the NewsGuard findings, the company did remove some ads from some of the sites.

Of course, posting fake and shoddy content for traffic, ads, and clicks is nothing new. But, as one expert confirmed, the most recent breakthroughs in AI technology make it much easier, faster, and cheaper. Gee, who could have foreseen that?

Cynthia Murrell, May 16, 2023

Architects: Handwork Is the Future for Some?

May 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I think there is a new category of blog posts. I call it the “we will be relevant” essay. A good example is from Architizer and its essay “5 Reasons Why Architects Must Not Give Up On Hand Drawings and Physical Models: Despite the Rise of CAD, BIM and Now AI, Low-Tech Creative Mediums Remain of Vital Importance to Architects and Designers.” [Note: BIM is an acronym for “building information modeling.”]

The write up asserts:

“As AI-generated content rapidly becomes the norm, I predict a counter-culture of manually-crafted creations, with the art of human imperfection and idiosyncrasy becoming marketable in its own right,” argued Architizer’s own Paul Keskeys in a recent Linkedin post.

The person doing the predicting is the editor of Architizer.

Now look at this architectural rendering of a tiny house. I generated it in a minute using MidJourney, a Jim Dandy image outputter.


I think it looks okay. Furthermore, I think it is a short step from the rendering to smart software outputting the floor plans, bill of materials, a checklist of legal procedures to follow, the content of those legal procedures, and a ready-to-distribute tender. The notion of threading together pools of information into a workflow is becoming a reality, if I believe the hot sauce doused on smart software when TWIST, Jason Calacanis’ AI-themed podcast, airs. I am not sure the vision of some of the confections explored on this program is right around the corner, but the functionality is now in a rental cloud computer and ready to depart.

Why would a person wanting to buy a tiny house pay a human to develop designs, go through the grunt work of figuring out window sizes, and get the specification ready for review? I just want a tiny house as reasonably priced as possible. I don’t want a person handcrafting a baby model with plastic trees. I don’t want a human junior intern plugging in the bits and pieces. I want software to do the job.

I am not sure how many people like me are thinking about tiny houses, ranch houses, or non-tilting apartment towers. I do think that architects who do not get with the smart software program will find themselves in a fight for survival.

CAD, BIM, and AI are capabilities that evoke images of buggy whip manufacturers who do not shift to Tesla interior repairs.

Stephen E Arnold, May 16, 2023

ChatGPT Mind Reading: Sure, Plus It Is a Force for Good

May 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The potential of artificial intelligence, for both good and evil, just got bumped up another notch. Surprised? Neither are we. The Guardian reveals, “AI Makes Non-Invasive Mind-Reading Possible by Turning Thoughts into Text.” For 15 years, researchers at the University of Texas at Austin have been working on a way to help patients whose stroke, motor neuron disease, or other conditions have made it hard to communicate. While impressive, previous systems could translate brain activity into text only with the help of surgical implants. More recently, researchers found a way to do the same thing with data from fMRI scans. But the process was so slow as to make it nearly useless as a communication tool. Until now. Correspondent Hannah Devlin writes:

“However, the advent of large language models – the kind of AI underpinning OpenAI’s ChatGPT – provided a new way in. These models are able to represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word. The learning process was intensive: three volunteers were required to lie in a scanner for 16 hours each, listening to podcasts. The decoder was trained to match brain activity to meaning using a large language model, GPT-1, a precursor to ChatGPT. Later, the same participants were scanned listening to a new story or imagining telling a story and the decoder was used to generate text from brain activity alone. About half the time, the text closely – and sometimes precisely – matched the intended meanings of the original words. ‘Our system works at the level of ideas, semantics, meaning,’ said Huth. ‘This is the reason why what we get out is not the exact words, it’s the gist.’ For instance, when a participant was played the words ‘I don’t have my driver’s license yet,’ the decoder translated them as ‘She has not even started to learn to drive yet’.”

That is a pretty good gist. See the write-up for more examples as well as a few limitations researchers found. Naturally, refinement continues. The study's co-author Jerry Tang acknowledges this technology could be dangerous in the hands of bad actors, but says they have “worked to avoid that.” He does not reveal exactly how. That is probably for the best.
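
The quoted passage compresses the method quite a bit. The toy sketch below shows only the core idea as described: fit an encoding model from language-model features to brain responses, then keep the candidate phrasing whose predicted response best matches the recorded scan. The random “embeddings” and simulated fMRI data are stand-ins; the UT Austin team used GPT-1 features and real scanner data, and their decoder searches over candidate word sequences rather than scoring a fixed list.

```python
import zlib
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for a semantic feature extractor. The real study used GPT-1
# activations; deterministic random vectors keep this sketch runnable but
# carry no actual meaning.
def embed(text: str, dim: int = 64) -> np.ndarray:
    local = np.random.default_rng(zlib.crc32(text.encode()))
    return local.standard_normal(dim)

# 1. Training: learn a mapping from semantic features to (simulated) fMRI data.
train_texts = [f"training sentence {i}" for i in range(200)]
X = np.stack([embed(t) for t in train_texts])
true_map = rng.standard_normal((64, 1000))                  # unknown brain code
Y = X @ true_map + 0.1 * rng.standard_normal((200, 1000))   # simulated scans
encoder = Ridge(alpha=1.0).fit(X, Y)

# 2. Decoding: score candidate phrasings by how well their *predicted* brain
#    response matches the response recorded while the subject listened.
heard = "I don't have my driver's license yet"
observed = embed(heard) @ true_map                           # simulated recording
candidates = [
    "She has not even started to learn to drive yet",
    "The weather was pleasant on the drive home",
    heard,
]

def match(candidate: str) -> float:
    predicted = encoder.predict(embed(candidate)[None, :])[0]
    return float(np.dot(predicted, observed) /
                 (np.linalg.norm(predicted) * np.linalg.norm(observed)))

# With real semantic features, near-paraphrases score close to the original,
# which is why the decoder recovers the gist rather than the exact words.
print(max(candidates, key=match))
```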

Cynthia Murrell, May 15, 2023
