Want Clicks: Do Sad, Really, Really Sorrowful

March 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The US is a hotbed of negative news. It’s what drives the media and perpetuates the culture of fear that (arguably) has plagued the country since colonial times. US citizens and now the rest of the world are so addicted to bad news that a research team got the brilliant idea to study what words people click. Nieman Lab wrote about the study in, “Negative Words In News Headlines Generate More Clicks-But Sad Words Are More Effective Than Angry Or Scary Ones.”


Thanks, MSFT Copilot. One of Redmond’s security professionals, I surmise?

Negative words are prevalent in headlines because they sell clicks. The Nature Human Behaviour journal published a study called “Negativity Drives Online News Consumption.” The study analyzed the effect of negative and emotional words on news consumption, and the research team discovered that negativity increased clickability. These findings also confirm the well-documented pattern of humans seeking out negativity in all information-seeking.

It coincides with humanity’s instinct to be vigilant about danger and avoid it. While humans instinctively gravitate toward negative headlines, certain negative words are more popular than others. Humans apparently are driven to click on sadness-related words and avoid anything resembling joy or fear, while angry words have no measurable effect. It all goes back to survival:

“And if we are to believe “Bad is stronger than good” derives from evolutionary psychology — that it arose as a useful heuristic to detect threats in our environment — why would fear-related words reduce likelihood to click? (The authors hypothesize that fear and anger might be more important in generating sharing behavior — which is public-facing — than clicks, which are private.)

In any event, this study puts some hard numbers to what, in most newsrooms, has been more of an editorial hunch: Readers are more drawn to negativity than to positivity. But thankfully, the effect size is small — and I’d wager that it’d be even smaller for any outlet that decided to lean too far in one direction or the other.”

It could also be a strict diet of danger-filled media.

Whitney Grace, March 13, 2024

In Tech We Mistrust

March 11, 2024

While tech firms were dumping billions into AI, they may have overlooked one key component: consumer faith. The Hill reports, “Trust in AI Companies Drops to 35 Percent in New Study.” We note that the 35% figure is for the US only, while the global drop was a mere 8%. Still, that is the wrong direction for anyone with a stake in the market. So what is happening? Writer Filip Timotija covers the numbers.

So it is not just AI we mistrust, it is tech companies as a whole. That tracks. The study polled 32,000 people across 28 countries. Timotija reminds us regulators in the US and abroad are scrambling to catch up. Will fear of consumer rejection do what neither lagging lawmakers nor common decency can? The write-up notes:

“Multiple factors contributed to the decline in trust toward the companies polled in the data, according to Justin Westcott, Edelman’s chair of global technology. ‘Key among these are fears related to privacy invasion, the potential for AI to devalue human contributions, and apprehensions about unregulated technological leaps outpacing ethical considerations,’ Westcott said, adding ‘the data points to a perceived lack of transparency and accountability in how AI companies operate and engage with societal impacts.’ Technology as a whole is losing its lead in trust among sectors, Edelman said, highlighting the key findings from the study. ‘Eight years ago, technology was the leading industry in trust in 90 percent of the countries we study,’ researchers wrote, referring to the 28 countries. ‘Now it is most trusted only in half.’

“Westcott argued the findings should be a ‘wake up call’ for AI companies to ‘build back credibility through ethical innovation, genuine community engagement and partnerships that place people and their concerns at the heart of AI developments.’ As for the impacts on the future for the industry as a whole, ‘societal acceptance of the technology is now at a crossroads,’ he said, adding that trust in AI and the companies producing it should be seen ‘not just as a challenge, but an opportunity.’”

Yes, an opportunity. All AI companies must do is emphasize ethics, transparency, and societal benefits over profits. Surely big tech firms will get right on that.

Cynthia Murrell, March 11, 2024

Google Gems: 21 February 2024

February 21, 2024

Saint Valentine’s Day week bulged with love and kisses from the Google. If I recall what I learned at Duquesne University, Father Valentine was a martyr and checked into heaven in the 3rd century CE. Figuring out the “real” news about Reverendissimo Padre is not easy, particularly with the advertising-supported Google search. Thus, it is logical that Google would have been demonstrating its love for its “users” with announcements, insights, and news as tokens of affection. I am touched. Let’s take a look at a selected rundown of love bonbons.

THE BIG STORY

The Beyond Search team agreed that the big story is part marketing and part cleverness. The Microsofties said that old PCs would become doorstops. Millions of Windows machines with “old” CPUs and firmware will not work with future updates to Windows. What did Google do? The company announced that it would allow users to install Chrome OS and continue computing with Google services and features. You can get some details in a Reuters’ story.


Thanks, MSFT Copilot OpenAI.

AN AMAZING STORY IF ACCURATE

Wired Magazine reported that Google wants to allow its “users” to talk to “live agents.” Does this mean smart software purported to be alive, or actual humans (who, one hopes, speak reasonably good English or another language, perhaps even Kallawaya)?

MANAGEMENT MOVES

I find Google’s management methods fascinating. I like to describe the method as similar to that used by my wildly popular high school science club. Google did not disappoint.

The Seattle Times reports that Google has made those in its Seattle office chilly. You can read about those cutbacks at this link. Google is apparently still refining its termination procedures.

A Xoogler provided a glimpse of the informed, ethical, sensitive, and respectful tactics Google used when dealing with “real” news organizations. I am not sure if the word “arrogance” is appropriate. It is definitely quite a write up and provides an X-ray of Google’s management precepts in action. You can find the paywalled write up at this link. For whom are the violins playing?

Google’s management decision to publish a report about policeware appears to have forced one vendor of specialized software to close up shop. If you want information about the power of Google’s “analysis and PR machine” navigate to this story.

LITIGATION

New York City wants to sue social media companies for negligence. The Google is unlikely to escape the Big Apple’s focus on the now-noticeable impacts of skipping “real” life for the scroll world. There’s more about this effort in Axios at this link.

An Australian firm has noted that Google may be facing allegations of patent infringement. More about this matter will appear in Beyond Search.

The Google may be making changes to try to ameliorate EU legal action related to misinformation. A flurry of Xhitter posts reveals some information about this alleged effort.

Google seems to be putting a “litigation fence” in place. In an effort to be a great outfit, “Google Launches €25M AI Drive to Empower Europe’s Workforce.” The NextWeb story reports:

The initiative is targeted at “vulnerable and underserved” communities, who Google said risk getting left behind as the use of AI in the workplace skyrockets — a trend that is expected to continue. Google said it had opened applications for social enterprises and nonprofits that could help reach those most likely to benefit from training.  Selected organizations will receive “bespoke and facilitated” training on foundational AI.

Could this be a tactic intended to show good faith when companies terminate employees because smart software like Google’s put individuals out of a job?

INNOVATION

The Android Police report that Google is working on a folding phone. “The Pixel Fold 2’s Leaked Redesign Sees Google Trading Originality for a Safe Bet” explains how “safe” provides insight into the company’s approach to doing “new” things. (Aren’t other mobile phone vendors dropping this form factor?) Other product and service tweaks include:

  1. Music Casting gets a new AI. Read more here.
  2. Google thinks it can imbue self-reasoning into its smart software. The ArXiv paper is here.
  3. Gemini will work with headphones in more countries. A somewhat confusing report is at this link.
  4. Forbes, the capitalist tool, is excited that Gmail will have “more” security. The capitalist tool’s perspective is at this link.
  5. Google has been inspired to emulate Telegram’s ability to edit recently sent messages. See 9to5Google’s explanation here.
  6. Google has released Goose to help its engineers write code faster. Will these steps lead to terminating less productive programmers?

SMART SOFTWARE

Google is retiring Bard (which some pundits converted to the unpleasant word “barf”). Behold Gemini. The news coverage has been the digital equivalent of old-school carpet bombing. There are many Gemini items. Some have been pushed down in the priority stack because OpenAI rolled out its text to video features which were more exciting to the “real” journalists. If you want to learn about Gemini, its zillion token capability, and the associated wonderfulness of the system, navigate to “Here’s Everything You Need to Know about Gemini 1.5, Google’s Newly Updated AI Model That Hopes to Challenge OpenAI.” I am not sure the article covers “everything.” The fact that Google rolled out Gemini and then updated it in a couple of days struck me as an important factoid. But I am not as informed as Yahoo.

Another AI announcement was in my heart shaped box of candy. Google’s AI wizards made PIVOT public. No, pivot is not spinning; it is Prompting with Iterative Visual Optimization. You can see the service in action in “PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs.” My hunch is that PIVOT was going to knock OpenAI off its PR perch. It didn’t. Plus, there is an ArXiv paper by Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, and a small army of co-authors at this link. But then there is that OpenAI Sora, isn’t there?

Gizmodo’s content kitchen produced a treat which broke one of Googzilla’s teeth. The article “Google and OpenAI’s Chatbots Have Almost No Safeguards against Creating AI Disinformation for the 2024 Presidential Election” explains that Google, like other smart software outfits, is essentially letting “users” speed down an unlit, unmarked, unpatrolled Information Superhighway.

Business Insider suggests that the Google “Wingman” (like a Copilot. Get the word play?) may cause some people to lose their jobs. Did this just happen in Google’s Seattle office? The “real” news outfit opined that AI tools like Google’s wingman whip up concerns about potential job displacement. Well, software is often good enough and does not require vacations, health care, and effective management guidance. That’s the theory.

Stephen E Arnold, February 21, 2024

Googzilla Takes Another OpenAI Sucker Punch

February 19, 2024


In January 2023, the savvy Googlers woke up to news that Microsoft and OpenAI had seized the initiative in smart software. One can argue the technical merits, but from a PR and marketing angle, the Softies and Sam AI-Man crept upon the World Economic Forum and clubbed the self-confident Googzilla in the cervical spine. The Google did not see that coming.

The somewhat quirky OpenAI has done it again. This time the blow was delivered with a kin geri or, more colloquially, a groin kick. How did Sam AI-Man execute this painful strike? Easy. The company released Sora, a text to video smart software function. “OpenAI’s Sora Generates Photorealistic Videos” reports:

Sora is a generative AI diffusion model. Sora can generate multiple characters, complex backgrounds and realistic-looking movements in videos up to a minute long. It can create multiple shots within one video, keeping the characters and visual style consistent, allowing Sora to be an effective storytelling tool.

Chatter indicates that OpenAI is not releasing a canned demonstration or carefully crafted fakey examples. Nope, unlike a certain large outfit with a very big bundle of cash, the OpenAI experts have skipped the demonstrations and gone directly to a release of the service to individuals who will probe the system for safety and good manners.

Could Googzilla be the company which OpenAI intends to drop to its knees? From my vantage point, heck yes. The outputs from the system are not absolutely Hollywood grade, but the examples are interesting and suggest that the Google, when it gets up off the floor, will have to do more.


Several observations:

  1. OpenAI is doing a good job with its marketing and PR. Google announces quantum supremacy; OpenAI provides a glimpse of a text to video function which will make game developers, Madison Avenue art history majors, and TikTok pay attention
  2. Google is once again in react mode. I am not sure pumping up the number of tokens in Bard or Gemini or whatever is going to be enough to scrub the Sora and prevent the spread of this digital infection
  3. Googzilla may be like the poor 1950s movie monster who was tamed not by a single blow but by many pesky attacks. I think this approach is called “death by a thousand cuts.”

Net net: OpenAI has pulled off a marketing coup for a second time. Googzilla is aging, and old often means slow. What is OpenAI’s next marketing play? A Bruce Lee “I am faster than you, big guy” or a Ninja stealth move? Both methods seem to have broken through the GOOG’s defenses.

Stephen E Arnold, February 19, 2024


Topicfinder and Its List of Free PR Sites

February 14, 2024


I noted “40+ Free Sites to Post a Company’s Press Release (Updated).” The “news” is that the list has been updated. What makes this list interesting to penny-pinching marketers is that the sites are “free.” However, it is a good idea to read about each site’s options and terms of service.


Free can be a powerful magnet. Thanks, Google Bard or Gemini or AI Test Kitchen, whatever.

The listing is broken into four categories:

  1. The free press release submission list. The sites listed have registration and review processes for obvious reasons; namely, to screen out promotions of illegal products and services and other content which can spark litigation or retribution. A short annotation accompanies each item.
  2. A list of “niche” free press release sites. The idea is that some free services want a certain type of content; for example, a technical slant or tourist content.
  3. A list of sites which now charge for press release distribution.
  4. A list of dead press release distribution sites.

Is the list comprehensive? No. Plus, release aggregation sites like Newswise are not included.

Several suggestions:

  1. The lists do not include the sometimes “interesting” outfits operating on the margins of the marketing world. One example we researched was the outfit doing business as icrowdnewswire.
  2. For-fee services are useful because a number of these firms have “relationships” with major search engines so that placement is allegedly “guaranteed.” Examples include PRUnderground, Benzinga, and others.
  3. The press release service may not offer a “forever archive”; that is, the press release content is disappeared to either save money or because old content is deemed to have zero click value to the distribution shop.

If you want to give “free” press releases a whirl, Topicfinder’s listing may be a useful starting point. OSINT experts may find some content gems pushed out from these services. Adding these to a watch list may be useful.

Keep in mind that once one registers, a bit of AI orchestration and some ChatGPT-type magic can create a news release blaster. Posting releases one-by-one is very yesterday.
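The “news release blaster” idea above can be sketched in a few lines. This is a hypothetical toy, not a recommendation: the site names, field names, and the payload structure are all invented for illustration, a real version would have to use each site’s actual submission form or API, honor its terms of service, and (as the orchestration remark suggests) could call an LLM to reword the release for each niche site before submitting.

```python
# Hypothetical sketch of a "news release blaster": fan one release out
# to several press-release sites instead of posting one-by-one.
# Site names and payload fields are placeholders, not real endpoints.
from dataclasses import dataclass


@dataclass
class Release:
    headline: str
    body: str
    contact: str


def build_payload(release: Release, site: str) -> dict:
    # Each site wants slightly different fields; normalize them here.
    return {
        "site": site,
        "title": release.headline.strip(),
        "text": release.body.strip(),
        "contact": release.contact,
    }


def blast(release: Release, sites: list[str]) -> list[dict]:
    # One loop replaces manual one-by-one posting; an LLM rewrite step
    # per site could slot in before build_payload().
    return [build_payload(release, site) for site in sites]


if __name__ == "__main__":
    r = Release("Widget Co. Ships Widget 2.0", "Details here.", "pr@example.com")
    payloads = blast(r, ["example-pr-site-a", "example-pr-site-b"])
    print(len(payloads))  # one payload per target site
```

The sketch stops at building payloads; the actual HTTP submission, registration cookies, and per-site review steps are exactly the parts that make “free” distribution less free than it looks.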

Stephen E Arnold, February 14, 2024

Sales SEO: A New Tool for Hype and Questionable Relevance

February 5, 2024


Search engine optimization is a relevance eraser. Now SEO has arrived for a human. “Microsoft Copilot Can Now Write the Sales Pitch of a Lifetime” makes clear that hiring is going to become more interesting for both human personnel directors (often called chief people officers) and AI-powered résumé screening systems. And for people who are responsible for procurement, figuring out when a marketing professional is tweaking the truth and hallucinating about a product or service will become a daily part of life… in theory.


Thanks for the carnival barker image, MSFT Copilot Bing thing. Good enough. I love the spelling of “asiractson”. With workers who may not be able to read, so what? Right?

The write up explains:

Microsoft Copilot for Sales uses specific data to bring insights and recommendations into its core apps, like Outlook, Microsoft Teams, and Word. With Copilot for Sales, users will be able to draft sales meeting briefs, summarize content, update CRM records directly from Outlook, view real-time sales insights during Teams calls, and generate content like sales pitches.

The article explains:

… Copilot for Service can pull in data from multiple sources, including public websites, SharePoint, and offline locations, in order to handle customer relations situations. It has similar features, including an email summary tool and content generation.

Why is MSFT expanding these interesting functions? Revenue. Paying extra unlocks these allegedly remarkable features. Prices range from $240 per year to a reasonable $600 per year per user. This is a small price to pay for an employee unable to craft solutions that sell, by golly.

Stephen E Arnold, February 5, 2024

Search Market Data: One Click to Oblivion Is Baloney, Mr. Google

January 24, 2024


Do you remember the “one click away” phrase? The idea was, and probably still is in the minds of some experts, that any user can change search engines with a click. Eric Schmidt (the adult once in charge of the Google) also suggested that he was kept awake at night worrying about Qwant. I know: Qwant what?

“I have all the marbles,” says the much loved child. Thanks, MSFT second string Copilot Bing thing. Good enough.

I read an interesting factoid. I don’t know if the numbers are spot on, but the general impression of the information lines up with what my team and I have noted for decades. The relevance champions at Search Engine Roundtable published “Report: Bing Gained Less Than 1% Market Share Since Adding Bing Chat.”

Here’s a passage I found interesting:

Bloomberg reported on the StatCounter data, saying, “But Microsoft’s search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement.”

There’s a chart which shows Google’s alleged 91.6 percent Web search market share. I love the precision of a point six, don’t you? The write up includes a survey result suggesting that Bing would gain more market share.

Yeah, one click away. Oh, Qwant.com is still online at https://www.qwant.com/. Rest easy, Google.

Stephen E Arnold, January 24, 2024

IBM Charges Toward Consulting Services: Does Don Quixote Work at Big Blue?

January 23, 2024


It is official. IBM consultants will use smart software to provide answers to clients. Why not ask the smart software directly and skip the consultants? Why aren’t IBM consultants sufficiently informed and intelligent to answer a client’s questions directly? Is IBM admitting that its consultants lack the knowledge depth and insight necessary to solve a client’s problems? Hmmm.

“IBM Introduces IBM Consulting Advantage, an AI Services Platform and Library of Assistants to Empower Consultants” asserts in corporate marketing lingo:

IBM Consulting Assistants are accessed through an intuitive conversational interface powered by IBM Watsonx, IBM’s AI and data platform. Consultants can toggle across multiple IBM and third-party generative AI models to compare outputs and select the right model for their task, and use the platform to rapidly build and share prompts and pre-trained assistants across teams or more widely across the consulting organization. The interface also enables easy uploading of project-specific documents for rapid insights that can then be shared into common business tools.

One of the key benefits of using smart software is to allow the IBM consultants to do more in the same billable hour. Thus, one can assume that billable hours will go up. “Efficiency” may not equate to revenue generation if the AI-assisted humanoids deliver incorrect, off-point, or unverifiable outputs.


A winner with a certain large company’s sure-fire technology. Thanks, MSFT second string Copilot Bing thing. Good enough.

What can the AI-turbo charged system do? A lot. Here’s what IBM marketing asserts:

The IBM Consulting Advantage platform will be applied across the breadth of IBM Consulting’s services, spanning strategy, experience, technology and operations. It is designed to work in combination with IBM Garage, a proven, collaborative engagement model to help clients fast-track innovation, realize value three times faster than traditional approaches, and transparently track business outcomes. Today’s announcement builds on IBM Consulting’s concrete steps in 2023 to further expand its expertise, tools and methods to help accelerate clients’ business transformations with enterprise-grade AI….

IBM Consulting helps accelerate business transformation for our clients through hybrid cloud and AI technologies, leveraging our open ecosystem of partners. With deep industry expertise spanning strategy, experience design, technology, and operations, we have become the trusted partner to many of the world’s most innovative and valuable companies, helping modernize and secure their most complex systems. Our 160,000 consultants embrace an open way of working and apply our proven, collaborative engagement model, IBM Garage, to scale ideas into outcomes.

I have some questions; for example:

  1. Will IBM hire less qualified and less expensive humans, assuming that smart software lifts them up to super star status?
  2. Will the system be hallucination proof; that is, what procedure ensures that decisions based on smart software assisted outputs are based on factual, reliable information?
  3. When a consulting engagement goes off the rails, how will IBM allocate responsibility; for example, 100 percent to the human, 50 percent to the human and 50 percent to those who were involved in building the model, or 100 percent to the client since the client made a decision and consultants just provide options and recommendations?

I look forward to IBM Watsonx’s revolutionizing consulting related to migrating COBOL from a mainframe to a hybrid environment relying on a distributed network with diverse software. Will Watsonx participate in Jeopardy again?

Stephen E Arnold, January 23, 2024

Cyber Security Software and AI: Man and Machine Hook Up

January 8, 2024


My hunch is that 2024 is going to be quite interesting with regards to cyber security. The race among policeware vendors to add “artificial intelligence” to their systems began shortly after Microsoft’s ChatGPT moment. Smart agents, predictive analytics coupled to text sources, real-time alerts from smart image monitoring systems are three application spaces getting AI boosts. The efforts are commendable if over-hyped. One high-profile firm’s online webinar presented jargon and buzzwords but zero evidence of the conviction or closure value of the smart enhancements.


The smart cyber security software system outputs alerts which the system manager cannot escape. Thanks, MSFT Copilot Bing thing. You produced a workable illustration without slapping my request across my face. Good enough too.

Let’s accept as a working premise that everyone from my French bulldog to my neighbor’s ex-wife wants smart software to bring back the good old, pre-Covid, go-go days. Also, I stipulate that one should ignore the fact that smart software is a demonstration of how numerical recipes can output “good enough” data. Hallucinations, errors, and close-enough-for-horseshoes are part of the method. What’s the likelihood the door of a commercial aircraft would be removed in flight? Answer: Well, most flights don’t lose their doors. Stop worrying. Those are the rules for this essay.

Let’s look at “The I in LLM Stands for Intelligence.” I grant the title may not be the best one I have spotted this month, but here’s the main point of the article in my opinion. Writing about automated threat and security alerts, the essay opines:

When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means. The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else.

The idea is that strapping on some smart software can increase the outputs from a security alerting system. Instead of helping the overworked and often reviled cyber security professional, the smart software makes it more difficult to figure out what a bad actor has done. The essay includes this blunt section heading: “Detecting AI Crap.” Enough said.

The idea is that more human expertise is needed. The smart software becomes a problem, not a solution.

I want to shift attention to the managers or the employee who caused a cyber security breach. In what is another zinger of a title, let’s look at this research report, “The Immediate Victims of the Con Would Rather Act As If the Con Never Happened. Instead, They’re Mad at the Outsiders Who Showed Them That They Were Being Fooled.” Okay, this is the ostrich method: deny stuff by burying one’s head in the digital sand of TikToks.

The write up explains:

The immediate victims of the con would rather act as if the con never happened. Instead, they’re mad at the outsiders who showed them that they were being fooled.

Let’s assume the data in this “Victims” write up are accurate, verifiable, and unbiased. (Yeah, I know that is a stretch.)

What do these two articles do to influence my view that cyber security will be an interesting topic in 2024? My answers are:

  1. Smart software will allegedly detect, alert, and warn of “issues.” The flow of “issues” may overwhelm or numb staff who must decide what’s real and what’s a fakeroo. Burdened staff can make errors, thus increasing security vulnerabilities or missing ones that are significant.
  2. Managers, like the staffer who lost a mobile phone with company passwords in a plain-text note file or an email called “passwords,” will blame whoever blows the whistle. The result is a willful refusal to talk about what happened, why, and the consequences. Examples range from big libraries in the UK to can-kicking hospitals in a flyover state like Kentucky.
  3. Marketers of remediation tools will have a banner year. Marketing collateral becomes a closed deal, making the art history majors who write the copy secure in their jobs at cyber security companies.

Will bad actors pay attention to smart software and the behavior of senior managers who want to protect share price or their own job? Yep. Close attention.

Stephen E Arnold, January 8, 2024


IBM: AI Marketing Like It Was 2004

January 5, 2024

This essay is the work of a dumb dinobaby. No smart software required. Note: The word “dinobaby” is — I have heard — a coinage of IBM. The meaning is an old employee who is no longer wanted due to salary, health care costs, and grousing about how the “new” IBM is not the “old” IBM. I am a proud user of the term, and I want to switch my tail to the person who whipped up the word.

What’s the future of AI? The answer depends on whom one asks. IBM, however, wants to give it the old college try and answer the question so people forget about the Era of Watson. There’s a new Watson in town, or at least, there is a new Watson at the old IBM URL. IBM has an interesting cluster of information on its Web site. The heading is “Forward Thinking: Experts Reveal What’s Next for AI.”

IBM crows that it “spoke with 30 artificial intelligence visionaries to learn what it will take to push the technology to the next level.” Five of these interviews are now available on the IBM Web site. My hunch is that IBM will post new interviews, hit the news release button, post some links on social media, and then hit the “Reply” button.


Can IBM ignite excitement and capture the revenues it wants from artificial intelligence? That’s a good question, and I want to ask the expert in the cartoon for an answer. Unfortunately, only customers and their decisions matter for AI thought leaders unless the intended audience is start-ups, professors, and employees. Thanks, MSFT Copilot Bing thing. Good enough.

As I read the interviews, I thought about the challenge of predicting where smart software would go as it moved toward its “what’s next.” Here’s a mini-glimpse of what the IBM visionaries have to offer. Note that I asked Microsoft’s smart software to create an image capturing the expert sitting in an office surrounded by memorabilia.

Kevin Kelly (the author of What Technology Wants) says: “Throughout the business world, every company these days is basically in the data business and they’re going to need AI to civilize and digest big data and make sense out of it—big data without AI is a big headache.” My thought is that IBM is going to make clear that it can help companies with deep pockets tackle big data and AI it. Does AI want something, or do those trying to generate revenue want something?

Mark Sagar (creator of BabyX) says: “We have had an exponential rise in the amount of video posted online through social media, etc. The increased use of video analysis in conjunction with contextual analysis will end up being an extremely important learning resource for recognizing all kinds of aspects of behavior and situations. This will have wide ranging social impact from security to training to more general knowledge for machines.” Maybe IBM will TikTok itself?

Chieko Asakawa (an unsighted IBM professional) says: “We use machine learning to teach the system to leverage sensors in smartphones as well as Bluetooth radio waves from beacons to determine your location. To provide detailed information that the visually impaired need to explore the real world, beacons have to be placed between every 5 to 10 meters. These can be built into building structures pretty easily today.” I wonder if the technology has surveillance utility?
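The beacon scheme Ms. Asakawa describes can be made concrete with a toy sketch. Everything below is illustrative and not drawn from any IBM system: the beacon IDs, the coordinates, and the path-loss constants are all assumptions. The idea is simply that with beacons placed every 5 to 10 meters, converting Bluetooth signal strength (RSSI) to a rough distance and snapping to the nearest beacon already yields a room-level position.

```python
# Toy sketch of beacon-based indoor positioning. All values are
# illustrative assumptions, not IBM's actual parameters.

# Hypothetical beacon map: beacon id -> (x, y) position in meters.
BEACONS = {"b1": (0.0, 0.0), "b2": (8.0, 0.0), "b3": (0.0, 8.0)}


def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    # Log-distance path-loss model: the weaker the signal, the farther
    # the beacon. tx_power_dbm is the assumed RSSI at 1 meter; n is the
    # assumed environment-dependent path-loss exponent.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))


def nearest_beacon(readings: dict[str, float]) -> tuple[str, tuple[float, float]]:
    # Snap the user to the beacon whose estimated distance is smallest.
    # With 5-10 m spacing this gives a coarse position; trilateration
    # over three or more beacons would refine it.
    best = min(readings, key=lambda b: rssi_to_distance(readings[b]))
    return best, BEACONS[best]
```

For example, `nearest_beacon({"b1": -60.0, "b2": -75.0, "b3": -80.0})` places the user at beacon `b1`, since the strongest reading maps to the shortest estimated distance. The surveillance question in the paragraph above follows directly: the same readings that guide a visually impaired user also record where that user is.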

Yoshua Bengio (seller of an AI company to ServiceNow) says: “AI will allow for much more personalized medicine and bring a revolution in the use of large medical datasets.” IBM appears to have forgotten about its Houston medical adventure, and Mr. Bengio found it not worth mentioning, I assume.

Margaret Boden (a former Harvard professor without much of a connection to Harvard’s made up data and administrative turmoil) says: “Right now, many of us come at AI from within our own silos and that’s holding us back.” Aren’t silos necessary for security, protecting intellectual property, and getting tenure? Probably the “silobreaking” will become a reality.

Several observations:

  1. IBM is clearly trying hard to market itself as a thought leader in artificial intelligence. The Jeopardy play did not warrant a replay.
  2. IBM is spending money to position itself as a Big Dog pulling the AI sleigh. The MIT tie up and this AI Web extravaganza are evidence that IBM is [a] afraid of flubbing again, [b] going to market its way to importance, [c] trying to get traction as outfits like OpenAI, Mistral, and others capture attention in the US and Europe.
  3. IBM’s ability to generate awareness of its thought leadership in AI underscores one of the challenges the firm faces in 2024.

Net net: The company that coined the term “dinobaby” has its work cut out for it, in my opinion. Is Jeopardy looking like a channel again?

Stephen E Arnold, January 5, 2024
