Canada Bill C-18 Delivers a Victory: How Long Will the Triumph Pay Off in Cash Money?

June 23, 2023

News outlets make or made most of their money selling advertising. The idea was — when I worked at a couple of big news publishing companies — that the audience for the content would attract those who wanted to reach that audience. I worked at the Courier-Journal & Louisville Times Co. before it dissolved into a Gannett marvel. If a used car dealer wanted to sell a 1980 Corvette, the choice was the newspaper or a free ad in what was called AutoTrader. This was a localized, printed collection of autos for sale. Some dealers advertised, but in the 1980s, individuals looking for a cheap or free way to pitch a vehicle loved AutoTrader. Despite a free option, the size of the readership and the sports news, comics, and obituaries made the Courier-Journal the must-have for a motivated seller.

Hannibal and his war elephant Zuckster survey the field of battle after Bill C-18 passes. MidJourney was the digital wonder responsible for this confection.

When I worked at the Ziffer in Manhattan, we published Computer Shopper. The biggest Computer Shopper had about 800 pages. It could have been bigger, but there were paper and press constraints, if I recall correctly. But I smile when I remember that 85 percent of those pages were paid advertisements. We had an audience, and those in the burgeoning computer and software business wanted to reach our audience. How many Ziffers remember the way publishing used to work?

When I read the National Post article titled “Meta Says It’s Blocking News on Facebook, Instagram after Government Passes Online News Bill,” I thought about the Battle of Cannae. The Romans had the troops, the weapons, and the psychological advantage. But Hannibal showed up and, if historical records are as accurate as a tweet, killed Romans and mercenaries. I think it may have been estimated that Roman whiz kids lost 40,000 troops and 5,000 cavalry along with the Roman strategic wizards Paulus, Servilius, and Atilius.

My hunch is that those who survived paid with labor or money to be allowed to survive. Being a slave in peak Rome was a dicey gig. Having a fungible skill like painting zowie murals was good. Having minimal skills? Well, someone has to work for nothing in the fields or quarries.

What’s the connection? The publishers are similar to the Roman generals. The bad guys are the digital rebels who are like Hannibal and his followers.

Back to the cited National Post article:

After the Senate passed the Online News Act Thursday, Meta confirmed it will remove news content from Facebook and Instagram for all Canadian users, but it remained unclear whether Google would follow suit for its platforms.  The act, which was known as Bill C-18, is designed to force Google and Facebook to share revenues with publishers for news stories that appear on their platforms. By removing news altogether, companies would be exempt from the legislation.

The idea is that US online services which touch most online users (maybe 90 or 95 percent in North America) will block news content. This means:

  1. Cash gushers from Facebook- and Google-type companies will not pay for news content. (This has some interesting downstream consequences but for this short essay, I want to focus on the “not paying” for news.)
  2. The publishers will experience a decline in traffic. Why? Without a "finding and pointing" mechanism, how would I find this "real news" article published by the National Post? (FYI: I think of this newspaper as Canada's USA Today, which was a Gannett crown jewel. How is that working out for Gannett today?)
  3. Rome triumphed only to fizzle out again. And Hannibal? He’s remembered for the elephants-through-the-Alps trick. Are man’s efforts ultimately futile?

Consider what happens next: the clicks stop accruing to the publishers' Web sites. How will the publishers generate traffic? SEO. Yeah, good luck with that.

Is there an alternative?

Yes, buy Facebook and Google advertising. I call this pay to play.

The Canadian news outlets will have to pay for traffic. I suppose companies like Tyler Technologies, which has an office in Vancouver, I think, could sell ads for the National Post's stories, but that seems to be a stretch. Similarly, the National Post could buy ads on the Embroidery Classics & Promotions (Calgary) Web site, but that may not produce too many clicks for the Canadian news outfits. I estimate one or two a month.

Bill C-18 may not have the desired effect. Facebook and Facebook-type outfits will, in my opinion, want to sell advertising to the Canadian publishers. And without high-impact, consistent, and relevant online advertising, state-of-the-art marketing, and juicy content, the publishers may find themselves either impaled on their digital hopes or placed in servitude to the Zuck and his fellow travelers.

Are these publishers able to pony up the cash and make the appropriate decisions to generate revenues like the good old days?

Sure, there’s a chance.

But it's a long shot. I estimate the chances as similar to King Charles' horse winning the 2023 King George V Stakes; that is, 18 to 1. But Desert Hero pulled it off. Who is rooting for the Canadian publishers?

Stephen E Arnold, June 23, 2023

High School Redux: Dust Up in the Science Club

June 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One cannot make up certain scenarios. Let me illustrate.

Navigate to "Google Accuses Microsoft of Anticompetitive Cloud Practices in Complaint to FTC." You will have to pony up to read the article. The main point is that the Google "filed a complaint to the U.S. Federal Trade Commission." Why? Microsoft is acting in an unfair manner. Is the phrase "Holy cow" applicable? Two quasi-monopolies, or at least almost-monopolies, are at odds. Amazing.

MidJourney’s wealth of originality produced this image of two adolescents threatening one another. Is the issue a significant other? A dented bicycle? A solution to a tough math problem like those explained by PreMath? Nope. The argument is about more weighty matters: Ego. Will one of these mature wizards call their mom? A more likely outcome is to let loose a flurry of really macho legal eagles and/or a pride of PR people.

But the next item is even more fascinating. Point your click monitoring, data sucking browser at “Send Me Location: Mark Zuckerberg Says He’s Down to Fight Elon Musk in a Cage Match.” Visualize if you will Elon Musk and Mark Zuckerberg entering the ring at a Streetbeefs’ venue. The referee is the ever-alert Anomaly. Scarface is in the ring just in case some real muscle is needed to separate the fighters.

Let's step back: Google wants to be treated fairly because Microsoft is using its market power to make sure the Google is finding it difficult to expand its cloud business. What's the fix? Google goes to court. Yeah, bold. What about lowering prices, improving service, and providing high value functionality? Nah, just go to court. Is this like two youngsters arguing in front of their lockers and one of them telling the principal that Mr. Softie is behaving badly?

And the Musk – Zuckerberg drama? An actual physical fight? No proxies. Just no-holds-barred fisticuffs? Apparently that’s the implication of the cited story. That social media territory is precious by golly.

Several observations:

  1. Life is surprising
  2. Alleged techno-giants are oblivious to the concept of pettiness
  3. Adolescent behavior, not sophisticated management methods, guides certain firms.

Okay, ChatGPT, beat these examples for hallucinatory content. Not even smart software can out-think how high school science club members process information and behave in front of those not in the group.

Stephen E Arnold, June 22, 2023

News Flash about SEO: Just 20 Years Too Late but, Hey, Who Pays Attention?

June 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article which would have been news a couple of decades ago. But I am a dinobaby (please, see anigif bouncing in an annoying manner) and I am hopelessly out of touch with what “real news” is.

An entrepreneur who just learned that in order to get traffic to her business Web site, she will have to spend big bucks and do search engine optimization, make YouTube videos (long and short), and follow Google's implicit and explicit rules. Sad. An MBA, I believe. The Moping Mistress of the Universe is a construct generated by the ever-innovative MidJourney and its delightful Discord interface.

The write up catching my attention is — hang on to your latte — “A Storefront for Robots: The SEO Arms Race Has Left Google and the Web Drowning in Garbage Text, with Customers and Businesses Flailing to Find Each Other.” I wondered if the word “flailing” is a typographic error or misspelling of “failing.” Failing strikes me as a more applicable word.

The thesis of the write up is that the destruction of precision and recall as useful measures for relevant online search and retrieval is not part of the Google game plan.

The write up asserts:

The result is SEO chum produced at scale, faster and cheaper than ever before. The internet looks the way it does largely to feed an ever-changing, opaque Google Search algorithm. Now, as the company itself builds AI search bots, the business as it stands is poised to eat itself.

Ah, ha. Garbage in, garbage out! Brilliant. The write up is about 4,000 words and makes clear that ecommerce requires generating baloney for Google.

To sum up, if you want traffic, do search engine optimization. The problem with the write up is that it is incorrect.

Let me explain. Navigate to “Google Earned $10 Million by Allowing Misleading Anti-Abortion Ads from Fake Clinics, Report Says.” What’s the point of this report? The answer is, “Google ads.” And money from a controversial group of supporters and detractors. Yes! An arms race of advertising.

Of course, SEO won't work. Why would it? Google's business is selling advertising. If you don't believe me, just go to a conference, find any Googler — including those wearing "Ivory Tower Worker" pins — and ask, "How important is Google's ad business?" But you know what most Googlers will say, don't you?

For decades, Google has cultivated the SEO ploy for one reason. Failed SEO campaigns end up one place, “Google Advertising.”

Why?

If you want traffic, like the abortion ad buyers, pony up the cash. The Google will punch the Pay to Play button, and traffic results. One change kicked in after 2006. The mom-and-pop ad buyers were not as important as the "brand" advertisers. And what was that change? Small advertisers were left to the SEO experts, who could then sell "small" ad campaigns when the hapless business owner learned that no one on the planet could locate the financial advisory firm named "Financial Specialist Advisors." Ah, then there was Google Local. A Googley spin on Yellow Pages. And there have been other innovations to make it possible for advertisers of any size to get traffic, though not much, because small advertisers spend small money. But ad dollars are what keeps Googzilla alive.

Net net: Keep in mind that Google wants to be the Internet. (AMP that up, folks.) Google wants people to trust the friendly beastie. The Googzilla is into responsibility. The Google is truth, justice, and the digital way. Is the criticism of the Google warranted? Sure, constructive criticism is a positive for some. The problem I have is that it is 20 years too late. Who cares? The EU seems to have an interest.

Stephen E Arnold, June 21, 2023

The Famous Google Paper about Attention, a Code Word for Transformer Methods

June 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Wow, many people are excited about a Bloomberg article called "The AI Boom Has Silicon Valley on Another Manic Quest to Change the World: A Guide to the New AI Technologies, Evangelists, Skeptics and Everyone Else Caught Up in the Flood of Cash and Enthusiasm Reshaping the Industry."

In the tweets and LinkedIn posts, one small factoid is omitted from the second hand content. If you want to read the famous Google Brain paper, the one which doomed the Googlers to watch their future from the cheap seats, you can find "Attention Is All You Need" branded with the imprimatur of the Neural Information Processing Systems Conference held in 2017. Here's the link to the paper.

For those who read the paper, I would like to suggest several questions to consider:

  1. What economic gain does Google derive from proliferation of its transformer system and method; for example, the open sourcing of the code?
  2. What does “attention” mean for [a] the cost of training and [b] the ability to steer the system and method? (Please, consider the question from the point of view of the user’s attention, the system and method’s attention, and a third-party meta-monitoring system such as advertising.)
  3. What other tasks of humans, software, and systems can benefit from the use of the Transformer system and methods?
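For readers who want question two grounded in the mechanism itself, here is a minimal sketch of the scaled dot-product attention the paper describes. This is my own NumPy paraphrase of the published formula, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, not Google's code; the shapes and values are invented for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each query "cares" about each key
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # rows sum to 1: the attention distribution
    return weights @ V, weights                      # output is a weighted mix of the values

rng = np.random.default_rng(0)
tokens, d_k = 4, 8                                   # four tokens, eight-dimensional projections
Q = rng.normal(size=(tokens, d_k))
K = rng.normal(size=(tokens, d_k))
V = rng.normal(size=(tokens, d_k))

output, attention_map = scaled_dot_product_attention(Q, K, V)
print(attention_map.round(2))                        # who pays attention to whom
```

The "attention" in question two is that weight matrix: every token spends a budget of probability mass on every other token, which is also why the computation and its cost grow with the square of the sequence length.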

I am okay with excitement for a 2017 paper, but including a link to the foundation document might be helpful to some, not many, but some.

Net net: Think about Google's use of the words "trust" and "responsibility" when you answer the three suggested questions.

Stephen E Arnold, June 20, 2023

Google: Smart Software Confusion

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I cannot understand. Not only am I old; I am a dinobaby. Furthermore, I am like one of William James’s straw men: Easy to knock down or set on fire. Bear with me this morning.

I read “Google Skeptical of AI: Google Doesn’t Trust Its Own AI Chatbots, Asks Employees Not to Use Bard.” The write up asserts as “real” information:

It seems that Google doesn’t trust any AI chatbot, including its own Bard AI bot. In an update to its security measures, Alphabet Inc., Google’s parent company has asked its employees to keep sensitive data away from public AI chatbots, including their own Bard AI.

The go-to word for the Google in the last few weeks is “trust.” The quote points out that Google doesn’t “trust” its own smart software. Does this mean that Google does not “trust” that which it created and is making available to its “users”?

MidJourney, an interesting but possibly insecure and secret-filled smart software system, generated this image of Googzilla as a gatekeeper. Are gatekeepers in place to make money, control who does what, and record the comings and goings of people, data, and content objects?

As I said, I am a dinobaby, and I think I am dumb. I don’t follow the circular reasoning; for example:

Google is worried that human reviewers may have access to the chat logs that these chatbots generate. AI developers often use this data to train their LLMs more, which poses a risk of data leaks.

Now the ante has gone up. The issue is one of protecting itself from its own software. Furthermore, if the statement is accurate, I take the words to mean that Google's Mandiant-infused, super duper security trooper cannot protect Google from itself.

Can my interpretation be correct? I hope not.

Then I read "This Google Leader Says ML Infrastructure Is Conduit to Company's AI Success." The "this" refers to an entity called Nadav Eiron, a Stanford PhD and Googley wizard. The use of "conduit" without so much as an article in front of it baffles me. That goes to support my contention that I am a dumb humanoid.

Now let’s look at the text of this write up about Google’s smart software. I noted this passage:

The journey from a great idea to a great product is very, very long and complicated. It’s especially complicated and expensive when it’s not one product but like 25, or however many were announced that Google I/O. And with the complexity that comes with doing all that in a way that’s scalable, responsible, sustainable and maintainable.

I recall someone telling me when I worked at a Fancy Dan blue chip consulting firm, “Stephen, two objectives are zero objectives.” Obviously Google is orders of magnitude more capable than the bozos at the consulting company. Google can do 25 objectives. Impressive.

I noted this statement:

we created the OpenXLA [an open-source ML compiler ecosystem co-developed by AI/ML industry leaders to compile and optimize models from all leading ML frameworks] because the interface into the compiler in the middle is something that would benefit everybody if it’s commoditized and standardized.
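To make the "compiler in the middle" idea concrete, here is a minimal sketch, assuming JAX, one of the frameworks that lowers models through the XLA path OpenXLA standardizes. The model and shapes are invented for illustration; the point is only that the framework hands a traced function to the shared compiler, which optimizes it for whatever hardware sits underneath.

```python
# A sketch of a framework handing work to "the compiler in the middle."
# Assumes a recent JAX install; the model is a throwaway example.
import jax
import jax.numpy as jnp

def tiny_model(params, x):
    w, b = params
    return jnp.tanh(x @ w + b)

params = (jnp.ones((4, 2)), jnp.zeros(2))
x = jnp.ones((3, 4))

compiled = jax.jit(tiny_model)      # trace the Python function and hand it to XLA
print(compiled(params, x))          # runs the XLA-compiled version

# In recent JAX versions you can peek at what crosses that interface:
print(jax.jit(tiny_model).lower(params, x).as_text()[:300])
```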

I think this means that Google wants to be the gatekeeper or man in the middle.

Now let’s consider the first article cited. Google does not want its employees to use smart software because it cannot be trusted.

Is it logical to conclude that Google and its partners should use software which is not trusted? Should Google and its partners not use smart software because it is not secure? Given these constraints, how does Google make advances in smart software?

My perception is:

  1. Google is not sure what to do
  2. Google wants to position its untrusted and insecure software as the industry standard
  3. Google wants to preserve its position in a workflow to maximize its profit and influence in markets.

You may not agree. But when articles present messages which are alarming and clearly focused on market control, I turn my skeptic control knob. By the way, the headline should be “Google’s Nadav Eiron Says Machine Learning Infrastructure Is a Conduit to Facilitate Google’s Control of Smart Software.”

Stephen E Arnold, June 19, 2023

Is Smart Software Above Navel Gazing: Nope, and It Does Not Care

June 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Synthetic data. Statistical smoothing. Recursive methods. When we presented our lecture “OSINT Blindspots” at the 2023 National Cyber Crime Conference, the audience perked up. The terms might have been familiar, but our framing caught the more than 100 investigators’ attention. The problem my son (Erik) and I described was butt simple: Faked data will derail a prosecution if an expert witness explains that machine-generated output may be wrong.

We provided some examples. One was a respected executive who obfuscates his "real" business behind a red-herring business. We profiled how information about a fervid Christian's adherence to God's precepts overshadowed a Ponzi scheme. We explained how an American living in Eastern Europe openly flouts social norms in order to distract authorities from an encrypted email business set up to allow easy, seamless communication for interesting people. And we included more examples.

An executive at a big time artificial intelligence firm looks over his domain and asks himself, “How long will it take for the boobs and boobettes to figure out that our smart software is wonky?” The illustration was spit out by the clever bits and bytes at MidJourney.

What’s the point in this blog post? Who cares besides analysts, lawyers, and investigators who have to winnow facts which are verifiable from shadow or ghost information activities?

It turns out that a handful of academics seem to have an interest in information manipulation. Their angle of vision is broader than my team’s. We focus on enforcement; the academics focus on tenure or getting grants. That’s okay. Different points of view lead to interesting conclusions.

Consider this academic and probably tough to figure out illustration from “The Curse of Recursion: Training on Generated Data Makes Models Forget”:

[The paper's illustration is not reproduced here.]

A less turgid summary of the researchers’ findings appears at this location.

The main idea is that gee-whiz methods like Snorkel and small language models have an interesting "feature." They forget; that is, as these models ingest fake data they drift, get lost, or go off the rails. A synthetic shirt, unlike one made of natural cotton, looks like the real thing. But on a hot day, those super duper modern fabrics can cause a person to perspire and probably emit unusual odors.

The authors introduce and explain "model collapse." I am no academic. My interpretation of the glorious academic prose is that the numerical recipes, systems, and methods don't work like the nifty demonstrations. In fact, over time, the models degrade. The hapless humanoids who are dependent on these systems lack the means to figure out what's on point and what's incorrect. The danger, obviously, is that clueless and lazy users of smart software make more mistakes in judgment than they otherwise would.
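To see the flavor of "model collapse" without the fancy mathematics, here is a toy sketch of my own, not the authors' code: fit a Gaussian to some data, sample a fresh "synthetic" batch from the fit, refit on that batch, and repeat. With each generation the model trains only on its own output, and over the generations the fitted distribution drifts and its spread decays; the tails of the original data are forgotten.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(31):
    # "Train" the model: here the model is just a fitted mean and spread.
    mu, sigma = data.mean(), data.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

    # The next generation never sees the real data; it trains only on
    # samples produced by the previous generation's model.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```

Real LLMs are vastly more complicated, but the arithmetic is the same: once machine-generated output replaces the original source, errors and omissions compound instead of washing out.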

The paper includes fancy mathematics and more charts which do not exactly deliver on the promise that a picture is worth a thousand words. Let me highlight one statement from the journal article:

Our evaluation suggests a “first mover advantage” when it comes to training models such as LLMs. In our work we demonstrate that training on samples from another generative model can induce a distribution shift, which over time causes Model Collapse. This in turn causes the model to mis-perceive the underlying learning task. To make sure that learning is sustained over a long time period, one needs to make sure that access to the original data source is preserved and that additional data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions around the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale.

Bang on.

What the academics do not point out are some “real world” business issues:

  1. Solving this problem costs money; the point of synthetic and machine-generated data is to reduce costs. Cost reduction wins.
  2. Furthermore, fixing up models takes time. In order to keep indexes fresh, delays are not part of the game plan for companies eager to dominate a market which Accenture pegs as worth trillions of dollars. (See this wild and crazy number.)
  3. Fiddling around to improve existing models is secondary to capturing the hearts and minds of those eager to worship a few big outfits' approach to smart software. No one wants to see the problem because that takes mental effort. Those inside one of the firms vying to own information framing don't want to be the nail that sticks up. Not only do the nails get pounded down, they are forced to leave the platform. I call this the Dr. Timnit Gebru effect.

Net net: Good paper. Nothing substantive will change in the short or near term.

Stephen E Arnold, June 15, 2023

Two Creatures from the Future Confront a Difficult Puzzle

June 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I was interested in a suggestion a colleague made to me at lunch. “Check out the new printed World Book encyclopedia.”

I replied, "A new one. Printed? Doesn't information change quickly today?"

My lunch colleague said, “That’s what I have heard.”

I offered, "Who wants printed, hard-to-change content objects? Where's the fun in sneaky or sockpuppet edits? Do you really want to go back to non-fluid information?"

My hungry debate opponent said, “What? Do you mean misinformation is good?”

I said, “It’s a digital world. Get with the program.”

Navigate to World Book.com and check out the 10 page sample about dinosaurs. When I scanned the entry, there was no information about dinobabies. I was disappointed because the dinosaur segment is bittersweet for these reasons:

  1. The printed encyclopedia is a dinosaur of sorts, an expensive one to produce and print at that
  2. As a dinobaby, I was expecting an IBM logo or maybe an illustration of a just-RIF’ed IBM worker talking with her attorney about age discrimination
  3. Those who want to fill a bookshelf can buy books at a second hand bookstore or connect with a zippy home designer to make the shelf tasteful. I think there is wallpaper of books on a shelf as an alternative.

Two aliens are trying to figure out what a single volume of a World Book encyclopedia contains. I assume the creatures are holding volume 6, "I," the one with information about the Internet. The image comes from the creative bits at MidJourney.

Let me dip into my past. Ah, you are not interested? Tough. Here we go down memory lane:

In 1953 or 1954, my father had an opportunity to work in Brazil. Off our family went. One of the must-haves was a set of World Book encyclopedias. The covers were brown; the pictures were mostly black and white; and the information was, according to my parents, accurate.

The schools in Campinas, Brazil, at that time used one language. Portuguese. No teacher spoke English. Therefore, after failing every class except mathematics, my parents decided to get me a tutor. The course work was provided by something called Calvert in Baltimore, Maryland. My teacher would explain the lesson, watch me read, ask me a couple of questions, and bail out after an hour or two. That lasted about as long as my stint in the Campinas school near our house. My tutor found himself on the business end of a snake. The snake lived; the tutor died.

My father — a practical accountant — concluded that I should read the World Book encyclopedia. Every volume. I think there were about 20 plus a couple of annual supplements. My mother monitored my progress and made me write summaries of the “interesting” articles. I recall that interesting or not, I did one summary a day and kept my parents happy.

I hate World Books. I was in the fourth or fifth grade. Campinas had great weather. There were many things to do. Watch the tarantulas congregate in our garage. Monitor the vultures circling my mother when she sunbathed on our deck. Kick a soccer ball when the students got out of school. (I always played. I sucked, but I had a leather, size five ball. Prior to our moving to the neighborhood, the kids my age played soccer with a rock wrapped in rags. The ball was my passport to an abuse free stint in rural Brazil.)

But a big chunk of my time was gobbled by the yawning white maw of a World Book.

When we returned to the US, I entered the seventh grade. No one at the public school in Illinois asked about my classes in Brazil. I just showed up in Miss Soape’s classroom and did the assignments. I do know one thing for sure: I was the only student in my class who did not have to read the assigned work. Reading the World Book granted me a free ride through grade school, high school, and the first couple of years at college.

Do I recommend that grade school kids read the World Book cover to cover?

No, I don't. I had no choice. I had no teacher. I had no radio because the electricity was on only a few hours a day. There was no TV because there were no broadcasts in Campinas. There was no English language anything. Thus, the World Book, which I hate, was the only game in town.

Will I buy the print edition of the 2023 World Book? Not a chance.

Will other people? My hunch is that sales will be a slog outside of library acquisitions and a few interior decorators trying to add color to a client’s book shelf.

I may be a dinobaby, but I have figured out how to look up information online.

The book thing: I think many young people will be as baffled about an encyclopedia as the two aliens in the illustration.

By the way, the full set is about $1,200. A cheap smartphone can be had for about $250. What will kids use to look up information? If you said, the printed encyclopedia, you are a rare bird. If you move to a remote spot on earth, you will definitely want to lug a set with you. Starlink can be expensive.

Stephen E Arnold, June 14, 2023

Smart Software: The Dream of Big Money Raining for Decades

June 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The illustration — from the crafty zeros and ones at MidJourney — depicts a young computer scientist reveling in the cash generated from his AI-infused innovation.

For a budding wizard, the idea of cash falling around the humanoid is invigorating. It is called a “coder’s high” or Silicon Valley fever. There is no known cure, even when FTX-type implosions doom a fellow traveler to months of litigation and some hard time among individuals typically not in an advanced math program.

Where does the cyclone of cash originate?

I would submit that articles like "Generative AI Revenue Is Set to Reach US$1.3 Trillion in 2032" are like catnip to a typical feline living amidst the cubes at a Google-type company or in the apartment of a significant other adjacent to a blue chip university in the US.

Here’s the chart that makes it easy to see the slope of the growth:

[The chart is not reproduced here.]

I want to point out that this confection is the result of the mid tier outfit IDC and the fascinating Bloomberg terminal. Therefore, I assume that it is rock solid, based on in-depth primary research, and deep analysis by third-party consultants. I do, however, reserve the right to think that the chart could have been produced by an intern eager to hit the gym and grabbing a sushi special before the good stuff was gone.
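A quick sanity check on the slope is possible with three numbers. Note that the 2022 base of roughly US$40 billion is the figure commonly attributed to the Bloomberg Intelligence report, not something stated above, so treat this as a back-of-the-envelope sketch rather than gospel.

```python
# Back-of-the-envelope: what annual growth turns ~$40B (2022) into $1.3T (2032)?
start = 40e9          # assumed 2022 base, per the widely cited report figure
target = 1.3e12       # the headline number for 2032
years = 10

cagr = (target / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")   # roughly 42% per year, every year, for a decade
```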

Will generative AI hit the $1.3 trillion target in nine years? In the hospital for recovering victims of spreadsheet fever, the coder's high might slow recovery. But many believe, indeed fervently hope, that they will experience the realities of William James's mystics in his Varieties of Religious Experience.

My goodness, the vision of money from Generative AI is infectious. So regulate mysticism? Erect guard rails to prevent those with a coder’s high from driving off the Information Superhighway?

Get real.

Stephen E Arnold, June 12, 2023

Can One Be Accurate, Responsible, and Trusted If One Plagiarizes

June 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Now that AI is such a hot topic, tech companies cannot afford to hold back due to small flaws. Like a tendency to spit out incorrect information, for example. One behemoth seems to have found a quick fix for that particular wrinkle: simple plagiarism. Eager to incorporate AI into its flagship Search platform, Google recently released a beta version to select users. Forbes contributor Matt Novak was among the lucky few and shares his observations in, “Google’s New AI-Powered Search Is a Beautiful Plagiarism Machine.”

The author takes us through his query and results on storing live oysters in the fridge, complete with screenshots of the Googlebot’s response. (Short answer: you can for a few days if you cover them with a damp towel.) He highlights passages that were lifted from websites, some with and some without tiny tweaks. To be fair, Google does link to its source pages alongside the pilfered passages. But why click through when you’ve already gotten what you came for? Novak writes:

“There are positive and negative things about this new Google Search experience. If you followed Google’s advice, you’d probably be just fine storing your oysters in the fridge, which is to say you won’t get sick. But, again, the reason Google’s advice is accurate brings us immediately to the negative: It’s just copying from websites and giving people no incentive to actually visit those websites.

Why does any of this matter? Because Google Search is easily the biggest driver of traffic for the vast majority of online publishers, whether it’s major newspapers or small independent blogs. And this change to Google’s most important product has the potential to devastate their already dwindling coffers. … Online publishers rely on people clicking on their stories. It’s how they generate revenue, whether that’s in the sale of subscriptions or the sale of those eyeballs to advertisers. But it’s not clear that this new form of Google Search will drive the same kind of traffic that it did over the past two decades.”

Ironically, Google’s AI may shoot itself in the foot by reducing traffic to informative websites: it needs their content to answer queries. Quite the conundrum it has made for itself.

Cynthia Murrell, June 14, 2023

Sam AI-man Speak: What I Meant about India Was… Really, Really Positive

June 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have noted Sam AI-man of OpenAI and his way with words. I called attention to an article which quoted him as suggesting that India would be forever chasing the Usain Bolt of smart software. Who is that? you may ask. The answer is, Sam AI-man.

MidJourney’s incredible insight engine generated an image of a young, impatient business man getting a robot to write his next speech. Good move, young business man. Go with regressing to the norm and recycling truisms.

The remarkable explainer appears in “Unacademy CEO Responds To Sam Altman’s Hopeless Remark; Says Accept The Reality.” Here’s the statement I noted:

Following the initial response, Altman clarified his remarks, stating that they were taken out of context. He emphasized that his comments were specifically focused on the challenge of competing with OpenAI using a mere $10 million investment. Altman clarified that his intention was to highlight the difficulty of attempting to rival OpenAI under such constrained financial circumstances. By providing this clarification, he aimed to address any misconceptions that may have arisen from his earlier statement.

To see the original “hopeless” remark, navigate to this link.

Sam AI-man is an icon. My hunch is that his public statements have most people in awe, maybe breathless. But India as hopeless in smart software? Just not too swift. Why not let ChatGPT craft one's public statements? Those answers are usually quite diplomatic, even if wrong or wonky sometimes.

Stephen E Arnold, June 13, 2023
