Curating Content: Not Really and Maybe Not at All

August 5, 2024

This essay is the work of a dumb humanoid. No smart software required.

Most people assume that software downloaded from an official “store” or surfaced by a “trusted” online Web search system comes without malware. Vendors bandy about the word “trust” while wizards in the back office are filtering, selecting, and setting up mechanisms to sell advertising to anyone who has money.


Advertising sales professionals are the epitome of professionalism. Google the word “trust”. You will find many references to these skilled individuals. Thanks, MSFT Copilot. Good enough.

Are these statements accurate? Because I love the high-tech outfits, my personal view is that online users today have these characteristics:

  1. Deep knowledge about nefarious methods
  2. The time to verify each content object is not malware
  3. A keen interest in sustaining the perception that the Internet is a clean, well-lit place. (Sorry, Mr. Hemingway, “lighted” will get you a points deduction in some grammarians’ fantasy world.)

I read “Google Ads Spread Mac Malware Disguised As Popular Browser.” My world is shattered. Is an alleged monopoly fostering malware? Is the dominant force in online advertising unable to verify that its advertisers are dealing from the top of the digital card deck? Is Google incapable of behaving in a responsible manner? I have to sit down. What a shock to my dinobaby system.

The write up alleges:

Google Ads are mostly harmless, but if you see one promoting a particular web browser, avoid clicking. Security researchers have discovered new malware for Mac devices that steals passwords, cryptocurrency wallets and other sensitive data. It masquerades as Arc, a new browser that recently gained popularity due to its unconventional user experience.

My assumption is that Google’s AI and human monitors would be paying close attention to a browser that seeks to challenge Google’s Chrome browser. Could I be incorrect? Obviously if the write up is accurate I am. Be still my heart.

The write up continues:

The Mac malware posing as a Google ad is called Poseidon, according to researchers at Malwarebytes. When clicking the “more information” option next to the ad, it shows it was purchased by an entity called Coles & Co, an advertiser identity Google claims to have verified. Google verifies every entity that wants to advertise on its platform. In Google’s own words, this process aims “to provide a safe and trustworthy ad ecosystem for users and to comply with emerging regulations.” However, there seems to be some lapse in the verification process if advertisers can openly distribute malware to users. Though it is Google’s job to do everything it can to block bad ads, sometimes bad actors can temporarily evade their detection.

But the malware apparently exists and the ads are the vector. What’s the fix? Google is already doing its typical A Number One Quantumly Supreme Job. Well, the fix is you, the user.

You are sufficiently skilled to detect, understand, and avoid such online trickery, right?

Stephen E Arnold, August 5, 2024

Judgment Before? No. Backing Off After? Yes.

August 5, 2024

I wanted to capture two moves from two technology giants. The first item is the report that Google pulled the oh-so-Googley ad about a father using Gemini to write a personal note to his daughter. If you are not familiar with the burst of creative marketing, you can glean a few details from “Google Pulls Gemini AI Ad from Olympics after Backlash.” The second item is the report that, according to Bloomberg, “Apple Pulls Commercial After Thai Backlash, Calls for Boycott.”

I reacted to these two separate announcements by thinking about what these do-it-then-reverse-it decisions suggest about the management controls at two technology giants.

Some management process operated to think up the ad ideas. Then each project had to be given the green light by “leadership” at the two outfits. Next, third-party providers had to be enlisted to do some of the “knowledge work.” Then, I assume, there were meetings to review the “creative.” Finally, one ad from several candidates was selected by each firm. The money was paid. And then the ads appeared. That’s a lot of steps and probably more than two or three people working in a cube next to a Foosball table.

Plus, the about-faces by the two companies did not take much time. Google caved after a few days. Apple also hopped on its harvester and chopped the Thai advertisement quickly as well. Decisiveness. Actually, decisiveness after the fact.

Why not use less obvious processes, like exercising better judgment before releasing the advertisements? Why not focus on working with people who are more in tune with audience reactions than with being clever, smooth talking, and desperate-eager for big company money?

Several observations:

  • Might I hypothesize that both companies lack a fabric of common sense?
  • If online ads “work,” why use what I would call old-school advertising methods? Perhaps the online angle is not correct for such important messaging from two companies that seem to do whatever they want most of the time?
  • The consequences of these do-then-undo actions are likely to be close to zero. Is that what operating in a no-consequences environment fosters?

I wonder if the back-away mentality is now standard operating procedure. We have Intel and Nvidia with some back-away actions. We have a nation state agreeing to a plea bargain and then un-agreeing the next day. We have a net neutrality rule, then we don’t, then we do, and now we don’t. Now that I think about it, perhaps because there are no significant consequences, decision quality has taken a nose dive?

Some believe that great complexity sets the stage for bad decisions which regress to worse decisions.

Stephen E Arnold, August 5, 2024

MBAs Gone Wild: Assertions, Animation & Antics

August 5, 2024

Author’s note: Poor WordPress in the Safari browser is having a very bad day. Quotes from the cited McKinsey document appear against a weird blue background. My cheerful little dinosaur disappeared. And I could not figure out how to claim that AI did not help me with this essay. Just a heads up.

Holed up in rural Illinois, I had time to read the mid-July McKinsey & Company document “McKinsey Technology Trends Outlook 2024.” Imagine a group of well-groomed, top-flight, smooth talking “experts” with degrees from fancy schools filming one of those MBA group brainstorming sessions. Take the transcript, add motion graphics, and give audio sweetening to hot buzzwords. I think this would go viral among would-be consultants, clients facing the cloud of unknowing about the future, and those who manifest the Peter Principle. Viral winner! From my point of view, smart software is going to be integrated into most technologies and is, therefore, the trend. People may lose money, but applied AI is going to be with most companies for a long, long time.

The report boils down the current business climate to a few factors. Yes, when faced with exceptionally complex problems, boil those suckers down. Render them so only the tasty sales part remains. Thus, today’s business challenges become:

Generative AI (gen AI) has been a standout trend since 2022, with the extraordinary uptick in interest and investment in this technology unlocking innovative possibilities across interconnected trends such as robotics and immersive reality. While the macroeconomic environment with elevated interest rates has affected equity capital investment and hiring, underlying indicators—including optimism, innovation, and longer-term talent needs—reflect a positive long-term trajectory in the 15 technology trends we analyzed.

The data for the report come from inputs from about 100 people, not counting the people who converted the inputs into the live-action report. Move your mouse from one of the 15 “trends” to another. You will see the graphic display colored balls of different sizes. Yep, tiny and tinier balls and a few big balls tossed in.

I don’t have the energy to take each trend and offer a comment. Please, navigate to the original document and review it at your leisure. I can, however, select three trends and offer an observation or two about this very tiny ball selection.

Before sharing those three trends, I want to provide some context. First, the data gathered appear to be subjective and similar to the dorm outputs of MBA students working on a group project. Second, there is no reference to the thought process itself, which matters when it is applied to a real-world problem like boosting sales for opioids. It is the thought process that leads to revenues from consulting that counts.

Source: https://www.youtube.com/watch?v=Dfv_tISYl8A
Image from the ENDEVR opioid video.

Third, McKinsey’s pool of 100 thought leaders seems fixated on two things:

gen AI and electrification and renewables.

But is that statement comprised of three things? [1] AI, [2] electrification, and [3] renewables? Because AI is a greedy consumer of electricity, I think I can see some connection between AI and renewables, but the “electrification” I think about is President Roosevelt’s creation of the Rural Electrification Administration in 1935. Dinobabies can be such nit pickers.

Let’s tackle the electrification point before I get to the real subject of the report, AI in assorted forms and applications. When McKinsey talks about electrification and renewables, McKinsey means:

The electrification and renewables trend encompasses the entire energy production, storage, and distribution value chain. Technologies include renewable sources, such as solar and wind power; clean firm-energy sources, such as nuclear and hydrogen, sustainable fuels, and bioenergy; and energy storage and distribution solutions such as long-duration battery systems and smart grids. In 2019, the interest score for electrification and renewables was 0.52 on a scale from 0 to 1, where 0 is low and 1 is high. The innovation score was 0.29 on the same scale. The adoption rate was scored at 3 on a scale from 1 to 5, with 1 defined as “frontier innovation” and 5 defined as “fully scaled.” The investment in 2019 was 160 billion dollars. By 2023, the interest score for electrification and renewables was 0.73. The innovation score was 0.36. The investment was 183 billion dollars. Job postings within this trend changed by 1 percent from 2022 to 2023.
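For quick reference, the scores buried in that quotation can be pulled out into a small sketch. The figures come from the quoted passage; the delta arithmetic is my own illustration, not something McKinsey reports:

```python
# Scores quoted for the "Electrification and renewables" trend
# (interest and innovation on a 0-to-1 scale; investment in billions of USD).
scores_2019 = {"interest": 0.52, "innovation": 0.29, "investment_usd_bn": 160}
scores_2023 = {"interest": 0.73, "innovation": 0.36, "investment_usd_bn": 183}

# Change from 2019 to 2023 for each metric.
deltas = {k: round(scores_2023[k] - scores_2019[k], 2) for k in scores_2019}
print(deltas)  # {'interest': 0.21, 'innovation': 0.07, 'investment_usd_bn': 23}
```

Laid out this way, the numbers say little more than “interest and spending both rose,” which is rather the point about boiling complex problems down to the tasty sales part.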

Stop burning fossil fuels? Well, not quite. But the “save the whales” meme is embedded in the verbiage. Confused? That may be the point. What’s the fix? Hire McKinsey to help clarify your thinking.

AI plays the big gorilla in the monograph. The first expensive, hairy, yet promising aspect of smart software is replacing humans. The McKinsey report asserts:

Generative AI describes algorithms (such as ChatGPT) that take unstructured data as input (for example, natural language and images) to create new content, including audio, code, images, text, simulations, and videos. It can automate, augment, and accelerate work by tapping into unstructured mixed-modality data sets to generate new content in various forms.

Yep, smart software can produce reports like this one: Faster, cheaper, and good enough. Just think of the reports the team can do.

The third trend I want to address is digital trust and cyber security. Now the cyber crime world is a relatively specialized one. We know from the CrowdStrike misstep that experts in cyber security can wreak havoc on a global scale. Furthermore, we know that there are hundreds of cyber security outfits offering smart software, threat intelligence, and very specialized technical services to protect their clients. But McKinsey appears to imply that its band of 100 trend identifiers are hip to this. Here’s what the dorm-room brainstormers output:

The digital trust and cybersecurity trend encompasses the technologies behind trust architectures and digital identity, cybersecurity, and Web3. These technologies enable organizations to build, scale, and maintain the trust of stakeholders.

Okay.

I want to mention that other trends, ranging from blasting into space to software development, appear in the list. What strikes me as a bit of an oversight is that smart software is going to be woven into the fabric of the other trends. What? Well, software is going to surf on AI outputs. And big boy rockets, not the duds the Seattle outfit produces, use assorted smart algorithms to keep the system from burning up or exploding… most of the time. Not perfect, but better, faster, and cheaper than CalTech grads solving equations and rigging cybernetics with wire and a soldering iron.

Net net: This trend report is a sales document. Its purpose is to cause an organization familiar with McKinsey and the organization’s own shortcomings to hire McKinsey to help out with these big problems. The data source is the dorm room. The analysts are cherry picked. The tone is quasi-authoritative. I have no problem with marketing material. In fact, I don’t have a problem with the McKinsey-generated list of trends. That’s what McKinsey does. What the firm does not do is to think about the downstream consequences of their recommendations. How do I know this? Returning from a lunch with some friends in rural Illinois, I spotted two opioid addicts doing the droop.

Stephen E Arnold, August 5, 2024

The Big Battle: Another WWF Show Piece for AI

August 2, 2024

This essay is the work of a dumb humanoid. No smart software required.

The Zuck believes in open source. It is like Linux. Boom. Market share. OpenAI believes in closed source (for now). Snap. You have to pay to get the good stuff. The argument about proprietary versus open source has been plodding along like Russia’s special operation for a long time. A typical response, in my opinion, is that open source is great because it allows a corporate interest to get cheap traction. Then with a surgical or not-so-surgical move, the big outfit co-opts the open source project. Boom. Semi-open source with a price tag becomes a competitive advantage. Proprietary software can be given away, licensed, or made available by subscription. Open source creates opportunities for training, special services, and feeling good about the community. But in the modern world of high-technology feeling good comes with sustainable flows of revenue and opportunities to raise prices faster than the local grocery store.


Where does open source software come from? Many students demonstrate their value by coding something useful to others. Thanks, OpenAI. Good enough.

I read “Consider the Llama: Are Closed Source AI Models Doomed?” The write up is good. It contains a passage which struck me as interesting; to wit:

OpenAI, Anthropic and the like—companies that sell access to AI models. These companies inherently require their products to be much better than open source in order to up-charge. They also don’t have some other product they sell that gets improved with better AI overall.

In my opinion, in the present business climate, the hope that a high-technology product gets better is an interesting one. The idea of continual improvement, however, is not part of the business culture of high-technology companies engaged in smart software. At this time, cooking up a model which can be used to streamline or otherwise enhance an existing activity is Job One. The first outfit to generate substantial revenue from artificial intelligence will have an advantage. That doesn’t mean the outfit won’t fail, but if one considers the requirements to play with a reasonable probability of winning the AI game, smart software costs money.

In the world of online, a company or open source foundation which delivers a product or service which attracts large numbers of users has an advantage. One “play” can shift the playing field, not just win the game. What’s going on at this time, in my opinion, is that those who understand the equivalent of a WWF (World Wide Wrestling) show piece know that winning allows the “winner take all” or at least the “winner takes two-thirds” outcome in the market.

Monopolies (real or imagined) with lots of money have an advantage. Open source smart software has to have money from somewhere; otherwise, the costs of producing a winning service cannot be covered. If a large outfit with cash goes open source, that is a bold chess move which other outfits cannot afford to match. The feel-good, community aspect of a smart software solution that can be used in a large number of use cases is going to fade quickly when any money on the table is taken by users who neither contribute, pay for training, nor hire great open source coders as consultants. Serious players just take the software, innovate, and lock up the benefits.

“Who would do this?” some might ask.

How about China, Russia, or some nation state not too interested in the Silicon Valley way? How about an entrepreneur in Armenia or one of the Stans who wants to create a novel product or service and charge for it? Sure, US-based services may host the product or service, but the actual big bucks flow to the outfit that keeps the technology “secret.”

At this time, US companies which make high-value software available for free to anyone who can connect to the Internet and download a file are not helping American business. You may disagree. But I know that there are quite a few organizations (commercial and governmental) who think the US approach to open source software is just plain dumb.

Wrapping up an important technology with do-goodism and mostly faux hand waving about the community creates two things:

  1. An advantage for commercial enterprises who want to thwart American technical influence
  2. Free intelligence for nation-states who would like nothing more than to convert the US into a client republic.

I did a job for a bunch of venture people who were into the open source religion. The reality is that at this time an alleged monopoly like Google can use its money and control of information flows to cripple other outfits trying to train their systems. On the other hand, companies who just want AI to work may become captive to an enterprise software vendor who is also an alleged monopoly. The companies funded by this firm have little chance of producing sustainable revenue. The best exits will be gift wrapping the “innovation” and selling it to another group of smart software-hungry investors.

Does the world need dozens of smart software “big dogs”? The answer is, “No.” At this time, the US is encouraging companies to make great strides in smart software. These are taking place. However, the rest of the world is learning and may have little or no desire to follow the open source path to the big WWF face off in the US.

The smart software revolution is one example of how America’s technology policy does not operate in a way that will cause our adversaries to do anything but download, enhance, build on, and lock up increasingly smarter AI systems.

From my vantage point, it is too late to undo the damage; the wildness of the last few years cannot be remediated. The big winners in open source are not the individual products. Like the WWF shows, the winner is the promoter. Very American and decidedly different from what those in other countries might expect or want. Money, control, and power are more important than the open source movement. Proprietary may be that group’s preferred approach. Open source is software created by computer science students to prove they can produce code that does something. The “real” smart software is quite different.

Stephen E Arnold, August 2, 2024

Fancy Cyber Methods Are Useless Against Insider Threats

August 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In my lectures to law enforcement and intelligence professionals, I end the talks with one statement: “Do not assume. Do not reduce costs by firing experienced professionals. Do not ignore human analyses of available information. Do not take short cuts.” Cyber security companies are often like the mythical kids of the village shoemaker. Those who can afford to hire the shoemaker have nifty kicks and slides. Those without resources have almost useless footwear.

Companies in the security business often have an exceptionally high opinion of their capabilities and expertise. I think of this as the Google Syndrome or what some have called by less salubrious names. The idea is that one is just so smart, nothing bad can happen here. Yeah, right.


An executive answers questions about a slight security misstep. Thanks, Microsoft Copilot. You have been there and done that I assume.

I read “North Korean Hacker Got Hired by US Security Vendor, Immediately Loaded Malware.” The article is a reminder that outfits in the OSINT, investigative, and intelligence business can make incredibly interesting decisions. Some of these lead to quite significant consequences. This particular case example illustrates how a hiring process using humans who are really smart and dedicated can be fooled, duped, and bamboozled.

The write up explains:

KnowBe4, a US-based security vendor, revealed that it unwittingly hired a North Korean hacker who attempted to load malware into the company’s network. KnowBe4 CEO and founder Stu Sjouwerman described the incident in a blog post yesterday, calling it a cautionary tale that was fortunately detected before causing any major problems.

I am a dinobaby, and I translated the passage to mean: “We hired a bad actor but, by the grace of the Big Guy, we avoided disaster.”

Sure, sure, you did.

I would suggest the company trapped only one instance of the person’s behavior. You may never know what that individual told a colleague in North Korea or another country, or what the bad actor said or emailed from a coffee shop using a contact’s computer. You may never know what business processes the person absorbed, converted to an encrypted message, and forwarded via a burner phone to a pal in a nation-state whose interests are not aligned with America’s.

In short, the cyber security company dropped the ball. It need not feel too bad. One of the companies I worked for early in my 60 year working career hired a person who dumped top secrets into journalists’ laps. Last week a person I knew was complaining about Delta Airlines which was shown to be quite addled in the wake of the CrowdStrike misstep.

What’s the fix? Go back to how I end my lectures. Those in the cyber security business need to be extra vigilant. The idea that “we are so smart, we have the answer” is an example of a mental short cut. The fact is that the company KnowBe4 did not. It is lucky it KnewAtAll. Some tips:

  1. Seek and hire vetted experts
  2. Question procedures and processes in “before action” and “after action” incidents
  3. Do not rely on assumptions
  4. Do not believe the outputs of smart software systems
  5. Invest in security instead of fancy automobiles and vacations.

Do these suggestions run counter to your business goals and your image of yourself? Too bad. Life is tough. Cyber crime is the growth business. Step up.

Stephen E Arnold, August 2, 2024

Survey Finds Two Thirds of Us Believe Chatbots Are Conscious

August 2, 2024

Well this is enlightening. TechSpot reports, “Survey Shows Many People Believe AI Chatbots Like ChatGPT Are Conscious.” And by many, writer Rob Thubron means two-thirds of those surveyed by researchers at the University of Waterloo. Two-thirds! We suppose it is no surprise the general public has this misconception. After all, even an AI engineer was famously convinced his company’s creation was sentient. We learn:

“The survey asked 300 people in the US if they thought ChatGPT could have the capacity for consciousness and the ability to make plans, reason, feel emotions, etc. They were also asked how often they used OpenAI’s product. Participants had to rate ChatGPT responses on a scale of 1 to 100, where 100 would mean absolute confidence that ChatGPT was experiencing consciousness, and 1 absolute confidence it was not. The results showed that the more someone used ChatGPT, the more they were likely to believe it had some form of consciousness. ‘These results demonstrate the power of language,’ said Dr. Clara Colombatto, professor of psychology at Waterloo’s Arts faculty, ‘because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind.’”

That is a good point. And these “agents” will only get more convincing even as more of us interact with them more often. It is encouraging that some schools are beginning to implement AI Literacy curricula. These programs include important topics like how to effectively work with AI, when to double-check its conclusions, and a rundown of ethical considerations. More to the point here, they give students a basic understanding of what is happening under the hood.

But it seems we need a push for adults to educate themselves, too. Even a basic understanding of machine learning and LLMs would help. It will take effort to thwart our natural tendency to anthropomorphize, which is reinforced by AI hype. That is important, because when we perceive AI to think and feel as we do, we change how we interact with it. The write-up notes:

“The study, published in the journal Neuroscience of Consciousness, states that this belief could impact people who interact with AI tools. On the one hand, it may strengthen social bonds and increase trust. But it may also lead to emotional dependence on the chatbots, reduce human interactions, and lead to an over-reliance on AI to make critical decisions.”

Soon we might even find ourselves catering to perceived needs of our software (or the actual goals of the firms that make them) instead of using them as inanimate tools. Is that a path we really want to go down? Is it too late to avoid it?

Cynthia Murrell, August 2, 2024

Yep, the Old Internet Is Gone. Learn to Love the New Internet

August 1, 2024

This essay is the work of a dumb humanoid. No smart software required.

The market has given the Google the green light to restrict information. The information highway has a new on ramp. If you want content created by people who were not compensated, you have to use Google search. Toss in the advertising system and that good old free market is going to deliver bumper revenue to stakeholders.


Online search is a problem. Here’s an old timer like me who broke his leg. The young wizard who works at a large online services firm explains that I should not worry. By the time my leg heals, I will be dead. Happy thoughts from one of those Gen somethings. Thanks, MSFT Copilot. How are your security systems today?

What about users? With Google the default search system on Apple iPhones and the brand that has redefined search and retrieval to mean “pay to play,” what’s the big deal?

Years ago I explained in numerous speeches and articles in publications like Online Magazine that online fosters the creation of centralized monopolistic information services. Some information professionals dismissed my observation as stupid. The general response was that online would generate benefits. I agree. But there were a few downsides. I usually pointed to the duopoly in online for-fee legal information. I referenced the American Chemical Society’s online service Chemical Abstracts. I even pointed out that outfits like Predicasts and the New York Times would have a very, very tough time creating profitable information-centric standalone businesses. The centralization or magnetic pull of certain online services would make generating profits very expensive.

So where are we now? I read “Reddit, Google, and the Real Cost of the AI Data Rush.” The article is representative of “real” journalists’, pundits’, and some regulators’ understanding of online information. The write up says:

Google, like Reddit, owes its existence and success to the principles and practices of the open web, but exclusive arrangements like these mark the end of that long and incredibly fruitful era. They’re also a sign of things to come. The web was already in rough shape, reduced over the last 15 years by the rise of walled-off platforms, battered by advertising consolidation, and polluted by a glut of content from the AI products that used it for training. The rise of AI scraping threatens to finish the job, collapsing a flawed but enormously successful, decades-long experiment in open networking and human communication to a set of antagonistic contracts between warring tech firms.

I want to point out that Google bought rights to Reddit. If you want to search Reddit, you use Google. Because Reddit is a high traffic site, users have to use Google. Guess what? Most online users do not care. Search means Google. Information access means Google. Finding a restaurant means Google. Period.

Google has become a center of gravity in the online universe. One can argue that Google is the Internet. In my monograph Google Version 2.0: The Calculating Predator that is exactly what some Googlers envisioned for the firm. Once a user accesses Google, Google controls the information world. One can argue that Meta and TikTok are going to prevent that. Some folks suggest that one of the AI start ups will neutralize Google’s centralized gravitational force. Google is a distributed outfit. Think of it as like the background radiation in our universe. It is just there. Live with it.

Google has converted content created by people who were not compensated into zeros and ones that will enhance its magnetic pull on users.

Several observations:

  1. Users were so enamored of a service which could show useful results from the quite large and very disorganized pools of digital information that the service sucked the life out of its competitors.
  2. Once a funding source got the message through to the Backrub boys that they had to monetize, the company obtained inspiration from the Yahoo pay to play model which Yahoo acquired from Overture.com, formerly GoTo.com. That pay to play thing produces lots of money when there is traffic. Google was getting traffic.
  3. Regulators ignored Google’s slow but steady march to information dominance. In fact, some regulatory professionals with whom I spoke thought Google was the cat’s pajamas and asked me if I could get them Google T shirts for their kids. Google was not evil; it was fun; it was a success.
  4. Almost the entire world’s intelligence professionals rely on Google for OSINT. If you don’t know what that means, forget the term. Knowing the control Google can exert by filtering information on a topic will probably give you a tummy ache.

The future is going to look exactly like the world of online in the year 1980. Google and maybe a couple of smaller also-rans will control access to digital information. To get advertising-free results and to have a shot at bias-free answers to online queries, users will have to pay. The currency will be watching advertising or subscribing to a premium service. The business model of Dialog Information Services, SDC, DataStar, and Dialcom is coming back. The prices will inflate. Control of information will be easy. And shaping or weaponizing content flow from these next generation online services will be too profitable to resist. Learn to love Google. It is more powerful than a single country’s government. If a country gets too frisky for Google’s liking, the company has ways to evade issues that make it uncomfortable.

The cartoon in this blog post summarizes my view of the situation. A fix will take a long time. I will be pushing up petunias before the problems of online search and the Information Superhighway are remediated.

Stephen E Arnold, August 1, 2024

Every Cloud Has a Silver Lining: Cyber Security Software from Israel

August 1, 2024

dinosaur30a_thumb_thumb_thumb_thumb__thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I wonder if those lucky Delta passengers have made it to their destinations yet? The CrowdStrike misstep caused a bit of a problem for some systems and for humans too. I saw a notice that CrowdStrike, founded by a Russian I believe, offered $10 to each person troubled by the teeny-tiny mistake. Isn’t that too much for something which cannot be blamed on any one person, just on an elusive machine-centric process that had a bad hair day? Why pay anything?

And there is a silver lining to the CrowdStrike cloud! I read “CrowdStrike’s Troubles Open New Doors for Israeli Cyber Companies.” [Note that this source document may be paywalled. Just a heads up, gentle reader.] The write up asserts:

For the Israeli cyber sector, CrowdStrike’s troubles are an opportunity.

Yep, opportunity.

The write up adds:

Friday’s [July 26, 2024] drop in CrowdStrike shares reflects investor frustration and the expectation that potential customers will now turn to competitors, strengthening the position of Israeli companies. This situation may renew interest in smaller startups and local procurement in Israel, given how many institutions were affected by the CrowdStrike debacle.

The write up uses the term platformization, which is a marketing concept of the Palo Alto Networks cyber security firm. The idea is that a typical company is a rat’s nest of cyber security systems. No one is able to keep the features, functions, and flaws of several systems in mind. When something misfires or a tiny stumble occurs, Mr. Chaos, the friend of every cyber security professional, strolls in and asks, “Planning on a fun weekend, folks?”

image

The sales person makes reality look different. Thanks, Microsoft Copilot. Your marketing would never distort anything, right?

Platformization sounds great. I am not sure that any cyber security magic wand works. My econo-box automobile runs, but I would not say, “It works.” I can ponder this conundrum as I wait for the mobile repair fellow to arrive or while riding in an Uber back to my office in rural Kentucky. The rides are evidence that “just works” is not exactly accurate. Your mileage may vary.

I want to point out that the write up is a bit of content marketing for Palo Alto Networks. Furthermore, I want to bring up a point which irritates some of my friends; namely, the Israeli cyber security systems, infrastructure, and smart software did not work in October 2023. Sure, there are lots of explanations. But which is more of a problem? CrowdStrike or the ineffectiveness of multiple systems?

Your call. The solution to cyber issues resides in informed professionals, adequate resources like money, and a commitment to security. Assumptions, marketing lingo, and fancy trade show booths simply prove that overpromising and under-delivering is standard operating procedure at this time.

Stephen E Arnold, August 1, 2024

A Reliability Test for General-Purpose AI

August 1, 2024

A team of researchers has developed a valuable technique: “How to Assess a General-Purpose AI Model’s Reliability Before It’s Deployed.” The ScienceDaily article begins by defining foundation models—the huge, generalized deep-learning models that underpin generative AI like ChatGPT and DALL-E. We are reminded these tools often make mistakes, and that sometimes these mistakes can have serious consequences. (Think self-driving cars.) We learn:

“To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.”
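The consistency idea in the passage above can be sketched in a few lines. This is a toy illustration, not the MIT team’s actual algorithm: the researchers assess consistency of learned representations across a set of slightly varied foundation models, which requires aligning the models’ embedding spaces; the sketch below simply averages pairwise cosine similarity of representations of the same test point, and every name in it is hypothetical.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def consistency_score(embeddings):
    """Average pairwise cosine similarity across several models'
    representations of the same test point. Higher means the models
    agree, which the article treats as a proxy for reliability."""
    n = len(embeddings)
    pairs = [cosine(embeddings[i], embeddings[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(pairs) / len(pairs)

# Toy demo: three "model variants" that nearly agree on a test point
# versus three that produce unrelated representations.
rng = np.random.default_rng(0)
base = rng.normal(size=8)
agree = [base + 0.01 * rng.normal(size=8) for _ in range(3)]
disagree = [rng.normal(size=8) for _ in range(3)]
print(consistency_score(agree), consistency_score(disagree))
```

In this simplified framing, a high score for a test point suggests the ensemble’s representations are stable there and the model can be trusted for that input; a low score flags a point worth holding back from deployment.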

Great! See the write-up for the technical details behind the technique. This breakthrough can help companies avoid mistakes before they launch their products. That is, if they elect to use it. Will organizations looking to use AI for cost cutting go through these processes? Sadly, we suspect that, if costs go down and lawsuits are few and far between, the AI is deemed good enough. But thanks for the suggestion, MIT.

Cynthia Murrell, August 1, 2024

Google and Its Smart Software: The Emotion Directed Use Case

July 31, 2024

green-dino_thumb_thumb_thumb_thumb_t_thumb_thumbThis essay is the work of a dumb humanoid. No smart software required.

How different are the Googlers from those smack in the middle of a normal curve? Some evidence is provided to answer this question in the Ars Technica article “Outsourcing Emotion: The Horror of Google’s ‘Dear Sydney’ AI Ad.” I did not see the advertisement. The volume of messages flooding through my channels each day has allowed me to develop what I call “ad blindness.” I don’t notice them; I don’t watch them; and I don’t care about the crazy content presentation which I struggle to understand.

image

A young person has to write a sympathy card. The smart software encourages the use of the word “feel.” This is a word foreign to the individual who wants to work for big tech someday. Thanks, MSFT Copilot. Do you have your hands full with security issues today?

Ars Technica watches TV and the Olympics. The write up reports:

In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone. “I’m pretty good with words, but this has to be just right,” the father intones before asking Gemini to “Help my daughter write a letter telling Sydney how inspiring she is…” Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be “just like you.”

What’s going on? The father wants to help his progeny send something personal. A Hallmark card may never be delivered from the US to France. The solution is an emessage. That makes sense. Essential services like delivering snail mail are, like most major systems, not working particularly well.

Ars Technica points out:

But I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, “Dear Sydney” presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.

I find the article’s negative reaction to a Mad Ave-type of message play somewhat insensitive. Let’s look at this use of smart software from the point of view of a person who is at the right-hand tail of the normal distribution. The factors in this curve are compensation, cleverness as measured in a Google interview, and intelligence as determined by what school a person attended, achievements when a person was in his or her teens, or solving one of the Courant Institute of Mathematical Sciences brain teasers. (These are shared at cocktail parties or over coffee. If you can’t answer, you pay the bill and never get invited back.)

Let’s run down the use of AI from this hypothetical right-of-the-curve viewpoint:

  1. What’s with this assumption that a Google-type person has experience with human interaction? Why not send a text even though your co-worker is at the next desk? Why waste time and brain cycles trying to emulate a Hallmark greeting card contractor’s phraseology? The use of AI is simply logical.
  2. Why criticize an alleged Googler or Googler-by-the-gig for using the company’s outstanding, quantumly supreme AI system? This outfit spends millions on running AI tests which allow the firm’s smart software to perform in an optimal manner in the messaging department. This is “eating the dog food one has prepared.” Think of it as quality testing.
  3. The AI system, running in the Google Cloud on Google technology, is faster than even a quantumly supreme Googler when it comes to generating feel-good platitudes. The technology works well. Evaluate this message in terms of the effectiveness of the messaging generated by Google leadership with regard to the Dr. Timnit Gebru matter. Upper quartile of performance, far beyond the dead-center-of-the-bell-curve humanoids.

My view is that there is one positive from this use of smart software to message a partially-developed and not completely educated younger person. The Sundar & Prabhakar Comedy Act has been recycling jokes and bits for months. Some find them repetitive. I do not. I am fascinated by the recycling. The S&P Show has its fans just as Jack Benny does decades after his demise. But others want new material.

By golly, I think the Google ad showing Google’s smart software generating a parental note is a hoot and a great demo. Plus look at the PR the spot has generated.

What’s not to like? Not much if you are Googley. If you are not Googley, sorry. There’s not much that can be done except shove ads at you whenever you encounter a Google product or service. The ad illustrates the mental orientation of Google. Learn to love it. Nothing is going to alter the trajectory of the Google for the foreseeable future. Why not use Google’s smart software to write a sympathy note to a friend when his or her parent dies? Why not use Google to write a note to the dean of a college arguing that your child should be admitted? Why not let Google think for you? At least that decision would be intentional.

Stephen E Arnold, July 31, 2024
