More Push Back Against US Wild West Tech

September 12, 2024

I spotted another example of a European nation state expressing concern about American high-technology companies. There is no wind-blown corral on Leidsestraat. No Sergio Leone music creeps up on the observers. What dominates the scene is a judicial judgment firing a US$35 million fine at Clearview AI. The company has a database of faces, and the information is licensed to law enforcement agencies. What’s interesting is that Clearview does not do business in the Netherlands; nevertheless, the European Union’s data protection regulation (the GDPR), according to Dutch authorities, has been violated. Ergo: Pay up.

“The Dutch Are Having None of Clearview AI Harvesting Your Photos” reports:

“Following investigation, the DPA confirmed that photos of Dutch citizens are included in the database. It also found that Clearview is accountable for two GDPR breaches. The first is the collection and use of photos….The second is the lack of transparency. According to the DPA, the startup doesn’t offer sufficient information to individuals whose photos are used, nor does it provide access to which data the company has about them.”

Clearview is apparently unhappy with the judgment.

Several observations:

First, the decision is part of what might be called US technology pushback. The Wild West approach to user privacy has to get out of Dodge.

Second, Clearview may be on the receiving end of more fines. The charges may appear to be inappropriate because Clearview does not operate in the Netherlands. Other countries may decide to go after the company too.

Third, the Dutch action may be the first of many actions against US high-technology companies.

Net net: If the US won’t curtail the Wild West activities of its technology-centric companies, the Dutch will.

Stephen E Arnold, September 12, 2024

Google and Search: A Fix or a Pipe Dream?

September 6, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Dawn of a New Era in Search: Balancing Innovation, Competition, and Public Good.”

Don’t get me wrong. I think multiple search systems are a good thing. The problem is that search (both enterprise and Web) presents difficult problems, and these problems are expensive to solve. After working more than 50 years in electronic information, I have seen search systems come and go. I have watched systems morph from search into weird products that hide the search plumbing beneath fancy words like business intelligence and OSINT tools, among others. In 2006 or 2007, one of my financial clients published some of our research. The bank received an email from an “expert” (formerly at Verity) asserting that his firm had better technology than Google. In that conversation, that “expert” said, “I can duplicate Google search for $300 million.” The person who said these incredibly uninformed words is now head of search at Google. Ed Zitron has characterized the individual as the person who killed Google search. Well, that fellow and Google search are still around. This suggests that baloney and high school reunions provide a career path for some people. But search is not understood particularly well at Google at this time. It is, therefore, no surprise that the problems of search remain largely unknown to judges, search engine marketing experts, developers of metasearch systems which recycle Bing results, and most of the poohbahs writing about search in blogs like Beyond Search.


The poor search kids see the rich guy with lots of money. The kids want it. The situation is not fair to those with little or nothing. Will the rich guy share the money? Thanks, Microsoft Copilot. Good enough. Aren’t you one of the poor Web search vendors?

After five decades of arm wrestling with finding on-point information for myself, my clients, and for the search-related start-ups with which I have worked, I have an awareness of how much complexity the word “search” obfuscates. There is a general perception that Google indexes the Web. It doesn’t. No one indexes the Web. What’s indexed are publicly exposed Web pages which a crawler can access. If the response is slow (like many government and underfunded personal / commercial sites), spiders time out. The pages are not indexed. The crawlers have to deal successfully with changes in how Web pages are presented. Upon encountering something for which it is not configured, the crawler skips the page. Certain Web sites are dynamic. The crawler has to cope with these. Then there are Web pages which are not composed of text. The problems are compounded by the vagaries of intermediaries’ actions; for example, what’s being blocked or filtered today? The answer is that the crawler skips them.
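A minimal sketch of that skip logic, assuming Python and the requests library; the timeout value and the specific checks are illustrative, not how any particular crawler is actually configured:

```python
import requests

TIMEOUT_SECONDS = 10  # illustrative; real crawlers tune this per host


def fetch(url):
    """Fetch a page; return its text, or None when the crawler would skip it."""
    try:
        resp = requests.get(url, timeout=TIMEOUT_SECONDS)
    except requests.RequestException:
        return None  # slow or unreachable site: the spider times out, the page is not indexed
    if resp.status_code != 200:
        return None  # blocked, filtered, or broken: skipped
    if "text/html" not in resp.headers.get("Content-Type", ""):
        return None  # not a text page the parser is configured for: skipped
    return resp.text
```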

Without revealing information I am not permitted to share, I want to point out that crawlers have a list which contains bluebirds, canaries, and dead ducks. The bluebirds are indexed by crawlers on an aggressive schedule, maybe multiple times every hour. The canaries are indexed on a normal cycle, maybe once every day or two. The dead ducks are crawled when time permits. Some US government Web sites may not be updated in six or nine months. The crawler visits such a site once every six months or even less frequently. Then there are forbidden sites which the crawler won’t touch. These are on the open Web, but their URLs are passed around via private messages. In terms of a Web search, these sites don’t exist.
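To make the tiers concrete, here is a toy scheduling table; the tier names come from the paragraph above, while the intervals are illustrative guesses rather than anyone’s production settings:

```python
from datetime import datetime, timedelta

# Illustrative revisit intervals for the three tiers described above.
CRAWL_INTERVALS = {
    "bluebird": timedelta(minutes=20),   # aggressive: several visits per hour
    "canary": timedelta(days=1),         # normal cycle: every day or two
    "dead_duck": timedelta(days=180),    # rarely updated: revisit every six months or so
}


def next_visit(last_visit: datetime, tier: str) -> datetime:
    """Return when the crawler should come back to a site in the given tier."""
    return last_visit + CRAWL_INTERVALS[tier]


# Example: a neglected government site crawled today will not be revisited for half a year.
print(next_visit(datetime(2024, 9, 6), "dead_duck"))
```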

How much does this cost? The answer is, “At scale, a lot. Indexing a small number of sites is really cheap.” The problem is that in order to pull lots of clicks, one has to have the money to scale or a niche no one else is occupying. Those are hard to find, and when one does, it makes sense to slap a subscription fee on them; for example, POISINDEX.

Why am I running through what strikes me as basic information about searching the Web? “Dawn of a New Era in Search: Balancing Innovation, Competition, and Public Good” is interesting and does a good job of expressing a specific view of Web search and Google’s content and information assets. I want to highlight the section of the write up titled “The Essential Facilities Doctrine.” The idea is that Google’s search index should be made available to everyone. The idea is interesting, and it might work after legal processes in the US are exhausted. The gating factor will be money and the political climate.

From a competitor’s point of view, the index blended with new ideas about how to answer a user’s query would level the playing field. From Google’s point of view, it would mean a loss of intellectual property.

Several observations:

  1. The hunger to punish Big Tech seems to demand satisfaction. Something will come from the judicial decision that Google is a monopoly. It took a couple of decades to arrive at what was obvious to some after the Yahoo ad technology settlement prior to the IPO, but most people didn’t and still don’t get “it.” So something will happen. What, exactly, is not yet known.
  2. Wide access to the complete Google index could threaten the national security of the US. Please, think about this statement. I can’t provide any color, but it is a consideration among some professionals.
  3. An appeal could neutralize some of the “harms,” yet allow the indexing business to continue. Specific provisions might be applied to the decision of Judge Mehta. A modified landscape for search could be created, but online services tend to coalesce into efficient structures. Consider the break up of AT&T: the seven Baby Bells and Bell Labs have become AT&T and Verizon. This could happen if “ads” were severed from Web search. But after a period of time, any break up will be fighting one of the Arnold Laws of Online: a single monopoly is more efficient and emergent.

To sum up, the time for action came and like a train in Switzerland, left on time. Undoing Google is going to be more difficult than fiddling with Standard Oil or the railroad magnates.

Stephen E Arnold, September 6, 2024

Uber Leadership May Have to Spend Money to Protect Drivers. Wow.

September 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Senior managers — now called “leadership” — care about their employees. I noted a wonderful example of corporate employee well-being and co-worker sensitivity when I read “Wells Fargo Employee Found Dead in Her Cubicle 4 Days After She Clocked in for Work.” One of my team asked me, “Will leadership at that firm check her hours of work so she is not overpaid for the day she died?” I replied, “You will make a wonderful corporate leader one day.” Another analyst asked, “Didn’t the cleaning crew notice?” I replied, “Not when they come once every two weeks.”


Thanks, MSFT Copilot. Good enough given your filters.

A similar approach to employee care popped up this morning. My newsreader displayed this headline: “Ninth Circuit Rules Uber Had Duty to Protect Washington Driver Murdered by Passengers.” The write up reported:

The estate of Uber driver Cherno Ceesay sued the rideshare company for negligence and wrongful death in 2021, arguing that Uber knew drivers were at risk of violent assault from passengers but neglected to install any basic safety measures, such as barriers between the front and back seats of Uber vehicles or dash cameras. They also claimed Uber failed to employ basic identity-verification technology to screen out the two customers who murdered Ceesay — Olivia Breanna-Lennon Bebic and Devin Kekoa Wade — even though they opened the Uber account using a fake name and unverified form of payment just minutes before calling for the ride.

Hold it right there. The reason behind the alleged “failure” may be the cost of barriers, dash cams, and identity verification technology. Uber is a Big Dog high-technology company. Its software manages rides, maps, payments, and the outstanding Uber app. If you want to know where your driver is, text the professional. Want to know the percentage of requests matched to drivers from a specific geographic point? Forget that, gentle reader. Request a ride and wait for a confirmation. Oh, what if a pick-up is cancelled after a confirmation? Fire up Lyft, right?

The cost of providing “basic” safety for riders is what helps make old fashioned taxi rides slightly more “safe.” At one time, Uber was cheaper than a weirdly painted taxi with a snappy phone number like 666 6666 or 777 7777 painted on the side. Now that taxis have been stressed by Uber, the Uber rides have become more expensive. Thanks to surge pricing, Uber in some areas is more expensive than taxis and some black car services if one can find one.

Uber wants cash and profits. “Basic” safety may add the friction of additional costs for staff, software licenses, and tangibles like plastic barriers and dash cams. The write up explains by quoting the legalese of the court decision; to wit:

“Uber alone controlled the verification methods of drivers and riders, what information to make available to each respective party, and consistently represented to drivers that it took their safety into consideration Ceesay relied entirely on Uber to match him with riders, and he was not given any meaningful information about the rider other than their location,” the majority wrote.

Now what? I am no legal eagle. I think Uber “leadership” will have meetings. Appropriate consultants will be retained to provide action plan options. Then staff (possibly AI assisted) will figure out how to reduce the probability of a murder in or near an Uber contractor’s vehicle.

My hunch is that the process will take time. In the meantime, I wonder if the Uber app autofills the “tip” section and then intelligently closes out that specific ride? I am confident that universities offering business classes will incorporate one or both of these examples in a class about corporate “leadership” principles. Tip: The money matters. Period.

Stephen E Arnold, September 5, 2024

Accountants: The Leaders Like Philco

September 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AI or smart software has roiled the normal routine of office gossip. We have shifted from “What is it?” to “Who will be affected next?” The integration of AI into work processes, however, is not a new thing. Most people don’t know or don’t recall that when a consultant could run a query from a clunky device like the Texas Instruments Silent 700, AI was already affecting jobs. Whose? Just ask a special librarian who was working when an intermediary was no longer needed to retrieve information from an online database.


A nervous smart robot running state-of-the-art tax software is sufficiently intelligent to be concerned about the meeting with an IRS audit team. Thanks, MSFT Copilot. How’s that security push coming along? Oh, too bad.

I read “Why America’s Most Boring Job Is on the Brink of Extinction.” I think the story was crafted by a person who received either a D or an F in Accounting 100. The lingo links accountants with being really dull people and the nuking of an entire species. No meteor is needed; just smart software, the silent killer. By the way, my two accountants are quite sporty. I rarely fall asleep when they explain life from their point of view. I listen, and I urge you to be attentive as well. Smart software can do some excellent things, but not everything related to tax, financial planning, and keeping inside the white lines of the quite fluid governmental rules and regulations.

Nevertheless, the write up cited above states:

Experts say the industry is nearing extinction because the 150-hour college credit rule, the intense entry exam and long work hours for minimal pay are unappealing to the younger generation.

The “real” news article includes some snappy quotes too. Here’s one I circled: “’The pay is crappy, the hours are long, and the work is drudgery, and the drudgery is especially so in their early years.’”

I am not an accountant, so I cannot comment on the accuracy of this statement. My father was an accountant, and he was into detail work and was able to raise a family. None of us ended up in jail or in the hospital after a gang fight. (I was and still am a sissy. Imagine that: An 80 year old dinobaby sissy with the DNA of an accountant. I am definitely exciting.)

With fewer people entering the field of accounting, the write up makes a remarkable statement:

… Accountants are becoming overworked and it is leading to mistakes in their work. More than 700 companies cited insufficient staff in accounting and other departments as a reason for potential errors in their quarterly earnings statements…

Does that mean smart software will become the accountants of the future? Some accountants may hope that smart software cannot do accounting. Others will see smart software as an opportunity to improve specific aspects of accounting processes. The problem, however, is not the accountants. The problem with AI is the companies or entrepreneurs who over-promise and under-deliver.

Will smart software replace the insight and timeline knowledge of an experienced numbers wrangler like my father or the two accountants upon whom I rely?

Unlikely. It is the smart software vendors and their marketers who are most vulnerable to the assertions about Philco, the leader.

Stephen E Arnold, September 4, 2024

Elastic N.V. Faces a New Search Challenge

September 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Elastic N.V. and Shay Banon are what I call search survivors. Gone are Autonomy (mostly), Delphis, Exalead, Fast Search & Transfer (mostly), Vivisimo, and dozens upon dozens of companies who sought to put an organization’s information at an employee’s fingertips. The marketing lingo of these and other now-defunct enterprise search vendors is surprisingly timely. One can copy and paste chunks of Autonomy’s white papers into the “OpenAI ChatGPT search is coming” articles, and few would notice that the assertions and even the word choice were more than 40 years old.

Elastic N.V. survived. It rose from a failed search system called Compass. Elastic N.V. recycled the Lucene libraries, released the open source Elasticsearch, and did an IPO. Some people made a lot of money. The question is, “Will that continue?”

I noted the Silicon Angle article “Elastic Shares Plunge 25% on Lower Revenue Projections Amid Slower Customer Commitments.” That write up says:

In its earnings release, Chief Executive Officer Ash Kulkarni started positively, noting that the results in the quarter were solid and outperformed previous guidance, but then comes the catch and the reason why Elastic stock is down so heavily after hours. “We had a slower start to the year with the volume of customer commitments impacted by segmentation changes that we made at the beginning of the year, which are taking longer than expected to settle,” Kulkarni wrote. “We have been taking steps to address this, but it will impact our revenue this year.” With that warning, Elastic said that it expects fiscal second-quarter adjusted earnings per share of 37 to 39 cents on revenue of $353 million to $355 million. The earnings per share forecast was ahead of the 34 cents expected by analysts, but revenue fell short of an expected $360.8 million. It was a similar story for Elastic’s full-year outlook, with the company forecasting earnings per share of $1.52 to $1.56 on revenue of $1.436 billion to $1.444 billion. The earnings per share outlook was ahead of an expected $1.42, but like the second quarter outlook, revenue fell short, as analysts had expected $1.478 billion.

Elastic N.V. makes money via service and for-fee extras. I want to point out that the $300 million or so revenue numbers are good. Elastic N.V. has figured out a business model that has not required [a] fiddling the books, [b] finding a buyer as customers complain about problems with the search software, [c] the sources of financing raging about cash burn and lousy revenue, [d] government investigators poking around for tax and other financial irregularities, [e] the cost of running the software being beyond the reach of the licensee, or [f] the system simply not searching or retrieving what the user wanted or expected.


Elastic N.V. and its management team may have a challenge to overcome. Thanks, OpenAI; the MSFT Copilot thing crashed today.

So what’s the fix?

A partial answer appears in the Elastic N.V. blog post titled “Elasticsearch Is Open Source, Again.” The company states:

The tl;dr is that we will be adding AGPL as another license option next to ELv2 and SSPL in the coming weeks. We never stopped believing and behaving like an open source community after we changed the license. But being able to use the term Open Source, by using AGPL, an OSI approved license, removes any questions, or fud, people might have.

Without slogging through the confusion between what Elastic N.V. sells, the open source version of Elasticsearch, the dust-up with Amazon over its really original approach to search inspired by Elasticsearch, Lucid Imagination’s innovation, and the creaking edifice of A9, Elastic N.V. has released Elasticsearch under an additional open source license. I think that means one can use the software and not pay Elastic N.V. until additional services are needed. In my experience, most enterprise search systems, regardless of how they are explained, need the “owner” of the system to lend a hand. Contrary to the belief that smart software can do enterprise search right now, there are some hurdles to get over.
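To make “use the software” concrete, here is a minimal sketch of indexing and querying a local Elasticsearch node through its REST API, assuming Python with the requests library and a single node running with security disabled; the index name, document, and query are hypothetical:

```python
import requests

ES = "http://localhost:9200"  # assumes a local node with security disabled for the demo

# Index a document; refresh=true makes it searchable immediately (fine for a demo, wasteful in bulk).
requests.put(
    f"{ES}/articles/_doc/1",
    params={"refresh": "true"},
    json={"title": "Elasticsearch is open source, again", "year": 2024},
)

# Run a basic full-text match query against the title field.
resp = requests.post(
    f"{ES}/articles/_search",
    json={"query": {"match": {"title": "open source"}}},
)
print(resp.json()["hits"]["hits"][0]["_source"])
```

The gap the post describes shows up after this point: relevance tuning, mappings, security, and scaling are where the “owner” of the system ends up lending a hand, and where the for-fee extras come in.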

Will “going open source again” work?

Let me offer several observations based on my experience with enterprise search and retrieval which reaches back to the days of punch cards and systems which used wooden rods to “pull” cards with a wanted tag (index term):

  1. When an enterprise search system loses revenue momentum, the fix is to acquire companies in an adjacent search space and use that revenue to bolster the sales prospects for upsells.
  2. The company with the downturn gilds the lily and seeks a buyer. One example was the sale of Exalead to Dassault Systèmes, which calculated it was more economical to buy a vendor than to keep paying its then current supplier, which I think was Autonomy, but I am not sure. Fast Search & Transfer pulled off this type of “exit” as some of the company’s activities were under scrutiny.
  3. The search vendor can pivot from doing “search” and morph into a business intelligence system. (By the way, that did not work for Grok.)
  4. The company disappears. One example is Entopia. Poof. Gone.

I hope Elastic N.V. thrives. I hope the “new” open source play works. Search — whether enterprise or Web variety — is far from a solved problem. People believe they have the answer. Others believe them and license the “new” solution. The reality is that finding information is a difficult challenge. Let’s hope the “downturn” and “negativism” go away.

Stephen E Arnold, September 2, 2024

The Seattle Syndrome: Definitely Debilitating

August 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I think the film “Sleepless in Seattle” included dialog like this:

“What do they call it when everything intersects?”
“The Bermuda Triangle.”

Seattle has Boeing. The company is in the news not just for doors falling off its aircraft. The outfit has stranded two people in earth orbit and has to let Elon Musk bring them back to earth. And Seattle has Amazon, an outfit that stands behind the products it sells. And I have to include Intel Labs, not too far from the University of Washington, which is famous in its own right for many things.


Two job seekers discuss future opportunities in some of Seattle and environs’ most well-known enterprises. The image of the city seems a bit dark. Thanks, MSFT Copilot. Are you having some dark thoughts about the area, its management talent pool, and its commitment to ethical business activity? That’s a lot of burning cars, but whatever.

Is Seattle a Bermuda Triangle for large companies?

This question invites another; specifically, “Is Microsoft entering Seattle’s Bermuda Triangle?”

The giant outfit has entered a deal with the interesting specialized software and consulting company Palantir Technologies Inc. This firm has a history of ups and downs since its founding 21 years ago. Microsoft has committed to smart software from OpenAI and other outfits. Artificial intelligence will be “in” everything from the Azure Cloud to Windows. Despite concerns about privacy, Microsoft wants each Windows user’s machine to keep screenshots of what the user “does” on that computer.

Microsoft seems to be navigating the Seattle Bermuda Triangle quite nicely. No hints of a flash disaster like the sinking of the sailing yacht Bayesian. Who could have predicted that? (That’s a reminder that fancy math does not deliver 1.000000 outputs on a consistent basis.)

Back to Seattle. I don’t think failure or extreme stress is due to the water. The weather, maybe? I don’t think it is the city government. It is probably not the multi-faceted start-up community nor the distinctive vocal tones of its most high-profile podcasters.

Why is Seattle emerging as a Bermuda Triangle for certain firms? What forces are intersecting? My observations are:

  1. Seattle’s business climate is a precursor of broader management issues. I think it is like the pigeons that Greeks examined for clues about their future.
  2. The individuals who work at Boeing-type outfits go along with business processes modified incrementally to ignore issues. The mental orientation of those employed is either malleable or indifferent to downstream issues. For example, a Windows update killed printing or some other function. The response strikes me as “meh.”
  3. The management philosophy disconnects from users and focuses on delivering financial results. Those big houses come at a cost. The payoff is personal. The cultural impacts are not on the radar. Hey, those quantum Horse Ridge things make good PR. What about the new desktop processors? Just great.

Net net: I think Seattle is a city playing an important role in defining how businesses operate in 2024 and beyond. I wish I were kidding. But I am bedeviled by reminders of a spacecraft which issues one-way tickets, software glitches, and products which seem to vary from the online images and reviews. (Maybe it is the water? Bermuda Triangle water?)

Stephen E Arnold, August 30, 2024

What Is a Good Example of AI Enhancing Work Processes? Klarna

August 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Klarna is a financial firm in Sweden. (Did you know Sweden has a violence problem?) The country also has a company which is quite public about the value of smart software to its operations. “‘Our Chatbots Perform The Tasks Of 700 People’: Buy Now, Pay Later Company Klarna To Axe 2,000 Jobs As AI Takes On More Roles” reports:

Klarna has already cut over 1,000 employees and plans to remove nearly 2,000 more

Yep, that’s the use case. Smart software allows the firm’s leadership to terminate people. (Does that managerial attitude contribute to the crime problem in Sweden? Of course not. The company is just being efficient.)

The write up states:

Klarna claims that its AI-powered chatbot can handle the workload previously managed by 700 full-time customer service agents. The company has reduced the average resolution time for customer service inquiries from 11 minutes to two while maintaining consistent customer satisfaction ratings compared to human agents.

What’s the financial payoff for this leader in AI deployment? The write up says:

Klarna reported a 73 percent increase in average revenue per employee compared to last year.

Klarna, however, is humane. According to the article:

Notably, none of the workforce reductions have been achieved through layoffs. Instead, the company has relied on a combination of natural staff turnover and a hiring freeze implemented last year.

That’s a relief. Some companies would deploy Microsoft software with AI and start getting rid of people. The financial benefits are significant. Plus, as long as the company chugs along in good enough mode, the smart software delivers a win for the firm.

Are there any downsides? None in the write up. There is a financial payoff on the horizon. The article states:

In July [2024], Chrysalis Investments, a major Klarna investor, provided a more recent valuation estimate, suggesting that the fintech firm could achieve a valuation between 15 billion and 20 billion dollars in an initial public offering.

But what if the AI acts like a brake on the firm’s revenue growth and sales? Hey, this is an AI success. Why be negative? AI is wonderful and Klarna’s customers appear to be thrilled with smart software. I personally love speaking to smart chatbots, don’t you?

Stephen E Arnold, August 30, 2024

Google Microtransaction Enabler: Chrome Beefs Up Its Monetization Options

August 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

For its next trick, Google appears to be channeling rival Amazon. We learn from TechRadar that “Google Is Developing a New Web Monetization Feature for Chrome that Could Really Change the Way We Pay for Things Online.” Will this development distract anyone from the recent monopoly ruling?

Writer Kristina Terech explains how Web Monetization will work for commercial websites:

“In a new support document published on the Google Chrome Platform Status site, Google explains that Web Monetization is a new technology that will enable website owners ‘to receive micro payments from users as they interact with their content.’ Google states its intention is noble, writing that Web Monetization is designed to be a new option for webmasters and publishers to generate revenue in a direct manner that’s not reliant on ads or subscriptions. Google explains that with Web Monetization, users would pay for content while they consume it. It’s also added a new HTML link element for websites to add to their URL address to indicate to the Chrome browser that the website supports Web Monetization. If this is set correctly in the website’s URL, for websites that facilitate users setting up digital wallets on it, when a person visits that website, a new monetization session would be created (for that person) on the site. I’m immediately skeptical about monetizing people’s attention even further than it already is, but Google reassures us that visitors will have control over the whole process, like the choice of sites they want to reward in this way and how much money they want to spend.”

But like so many online “choices,” how many users will pay enough attention to make them? I share Terech’s distaste for attention monetization, but that ship has sailed. The danger here (or advantage, for merchants): Many users will increase their spending by barely noticeable amounts that add up to a hefty chunk in the end. On the other hand, the feature could reduce costly processing charges by eliminating per-payment fees for merchants. Whether end users see those savings, though, depends on whether vendors choose to pass them along.
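For those curious about the mechanics, the pre-existing Web Monetization proposal from the Interledger community signals support with a link element in the page head that points at a wallet address; whether Chrome’s version uses exactly this markup is an assumption, and the page and wallet URL below are hypothetical. A minimal sketch of detecting the marker:

```python
from html.parser import HTMLParser

# Hypothetical page declaring Web Monetization support via a <link rel="monetization"> element.
PAGE = """
<html><head>
  <link rel="monetization" href="https://wallet.example.com/alice">
</head><body>Content paid for while it is consumed.</body></html>
"""


class MonetizationLinkFinder(HTMLParser):
    """Collects wallet addresses declared by monetization <link> elements."""

    def __init__(self):
        super().__init__()
        self.wallets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "monetization":
            self.wallets.append(attrs.get("href"))


finder = MonetizationLinkFinder()
finder.feed(PAGE)
print(finder.wallets)  # ['https://wallet.example.com/alice']
```

A browser that supports the feature would, in effect, perform this detection itself and then stream micropayments to the declared wallet while the visitor consumes the content.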

Cynthia Murrell, August 29, 2024

AI Snake Oil Hisses at AI

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Enthusiasm for certain types of novel software or gadgets rises and falls. The Microsoft marketing play with OpenAI marked the beginning of the smart software hype derby. Google got the message and flipped into Red Alert mode. Now about 20 months after Microsoft’s announcement about its AI tie up with Sam AI-Man, we have Google’s new combo: AI in a mobile phone. Bam! Job done. Slam dunk.


Thanks, MSFT Copilot. On top of the IPv6 issue? Oh, too bad.

I wonder if the Googlers were thinking along the same logical lines as the authors of “AI Companies Are Pivoting from Creating Gods to Building Products. Good.”

The snake oil? Dripping. Here’s a passage from the article I noted:

AI companies are collectively planning to spend a trillion dollars on hardware and data centers, but there’s been relatively little to show for it so far.

A trillion? That’s a decent number. Sam AI-Man wants more, but the scale is helpful, particularly when most numbers are mere billions in the zoom zoom world of smart software.

The most important item in the write up, in my opinion, is the list of five “challenges.” The article focuses on consumer AI. A couple of these apply to the enterprise sector as well. Let’s look at the five “challenges.” These are, and keep in mind I am paraphrasing as dinobabies often do:

  1. Cost. In terms of consumers, one must consider making Hamster Kombat smart. (This is a Telegram dApp.) My team informed me that this little gem has 35 million users, and it is still growing. Imagine the computational cost to infuse each and every Hamster Kombat “game” player with AI goodness. But it’s a game and a distributed one at that, one might say. Someone has to pay for these cycles. And Hamster Kombat is not on most consumers’ radar. Telegram has about 950 million users, so the 35 million users come from that pool. What are the costs of AI-infused games outside of a walled garden? And the hardware? And the optimization engineering? And the fooling around with ad deals? Costs are not a mere hurdle. Costs might be a Grand Canyon-scale leap into a financial mud bank.
  2. Reliability. Immature systems and methods, training content issues (real and synthetic), and the fancy math which uses a lot of probability procedures guarantees some interesting outputs.
  3. Privacy. The consumer or user facing services are immature. Developers want to get something to mostly work in a good enough manner. Then security may be discussed. But on to the next feature. As a result, I am not sure if anyone has a decent grasp of the security issues which smart software might pose. Look at Microsoft. It’s been around almost half a century, and I learn about new security problems every day. Is smart software different?
  4. Safety and security. This is a concomitant to privacy. Good luck knowing what the systems do or do not do.
  5. User interface. I am a dinobaby. The interfaces are pale, low contrast, and change depending on what a user clicks. I like stability. Smart software simply does not comprehend that word.

Good points. My view is that the obstacle to surmount is money. I am not sure that the big outfits anticipated the costs of their sally into the hallucinating world of AI. And what are those costs, pray tell? Here are selected items the financial managers at the Big Dogs are pondering along with the wording of their updated LinkedIn profiles:

  • Litigation. Remarks by some icons of the high technology sector have done little to assuage the feelings of those whose content was used without permission or compensation. Some, some people. A few Big Dogs are paying cash to scrape.
  • Power. Yep, electricity, as EV owners know, is not really free.
  • Water. Yep, modern machines produce heat if what I learned in physics was actual factual.
  • People (until they can be replaced by a machine that does not require health care or engage in signing petitions).
  • Data and indexing. Yep, still around and expensive.
  • License fees. They are comin’ round the mountain of legal filings.
  • Meals, travel and lodging. Leadership will be testifying, probably a lot.
  • PR advisors and crisis consultants. See the first bullet, Litigation.

However, slowly but surely some commercial sectors are using smart software. There is an AI law firm. There are dermatologists letting AI determine what to cut, freeze, or ignore. And there are college professors using AI to help them do “original” work and create peer-review fodder.

There was a snake in the Garden of Eden, right?

Stephen E Arnold, August 23, 2024

Google Leadership Versus Valued Googlers

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What’s this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.

The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”

How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie up between Microsoft and Palantir for Impact Level 5 services means that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for the frisky behavior Judge Mehta describes as “monopolistic” could put a hitch in the git-along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18-year-olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in a field, partially obscured by bushes. The individual is attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.


A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?

Those who study warfare probably have their own ah-ha moment.

The Time Magazine write up adds:

Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

I love it when wizards “believe” something.

Will the Sundar & Prabhakar brain trust keep believing or start banking revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from potential sanctions, fines, or corporate deconstruction the decision of Judge Mehta presents.

The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:

The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.

The write up concludes with this paragraph:

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why procrastinate? Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start up.

The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.

Stephen E Arnold, August 23, 2024
