Security Debt: So Just Be a Responsible User / Developer

February 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Security appears to be one of the next big things. Smart software strapped onto cyber safeguard systems is a no-lose proposition for vendors. Does it matter that bolted-on AI may not work? Nope. The important point is to ride the opportunity wave.

What’s interesting is that security is becoming a topic discussed at 75-something bridge groups and at lunch gatherings in government agencies concerned about fish and trees. Can third-party security services, grandmothers chasing a grand slam, or an expert in river fowl address security problems? I would suggest that the idea that security is the user’s responsibility is an interesting way to dodge responsibility. The estimable 23andMe tried this play, and I am not too sure that it worked.


Can security debt become the invisible hand creating opportunities for bad actors? Has the young executive reached the point of no return for a personal debt crisis? Thanks, MSFT Pilot Bing for a good enough illustration.

Who can address the security issues in the software people and organizations use today? “Why Software Security Debt Is Becoming a Serious Problem for Developers” states:

Over 70% of organizations have software containing flaws that have remained unfixed for longer than a year, constituting security debt.

Plus, the article asserts:

46% of organizations were found to have persistent, high-severity flaws that went unaddressed for over a year

Security issues exist. But the question is, “Who will address these flaws, gaps, and mistakes?”

The article cites an expert who opines:

“The further that you shift [security testing] to the developer’s desktop and have them see it as early as possible so they can fix it, the better, because number one it’s going to help them understand the issue more and [number two] it’s going to build the habits around avoiding it.”

But who is going to fix the security problems?

In-house developers may not have the expertise or access to the uncompiled code needed to identify and remediate flaws. Open source and other third-party software can change without notice, whether because maintainers do what is best for themselves or because bad actors manipulate open source projects and “approved” apps available from a large technology company’s online store.

The article offers a number of suggestions, but none of these strike me as practical for some or most organizations.

Here’s the problem: Security is not a priority until a problem surfaces. Then when a problem becomes known, the delay between compromise, discovery, and public announcement can be — let’s be gentle — significant. Once a cyber security vendor “discovers” the problem or learns about it from a customer who calls and asks, “What has happened?”, the PR machines grind into action.

The “fixes” are typically rush jobs for these reasons:

  1. The vendor and the developer who made the zero a one do not earn money by fixing old code. Another factor is that the person or team responsible for the misstep is long gone, working as an Uber driver, or sitting in a rocking chair in a warehouse for the elderly.
  2. The complexity of “going back” and making a fix may create other problems. These dependencies are unknown, so a fix just creates more problems. Writing a shim or wrapper code may be good enough to get the angry dogs to calm down and stop barking.
  3. The security flaw may be unfixable; that is, the original approach includes, and may even depend on, flaws introduced for performance, expediency, or some quite revenue-centric reason. No one wants to rebuild a Pinto that explodes in a rear-end collision. Let the lawyers deal with it. When it comes to code, lawyers are definitely equipped to resolve security problems.
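The “shim or wrapper code” tactic in point 2 can be sketched in a few lines. Everything below is hypothetical, my own illustration rather than any vendor’s practice: a legacy routine that trusts its input, fronted by a thin validation layer so the old code is never touched.

```python
# Hypothetical legacy routine: it builds a query string with no input
# validation -- the kind of old code nobody is paid to rewrite.
def legacy_order_lookup(record_id: str) -> str:
    return f"SELECT * FROM orders WHERE id = '{record_id}'"

# The shim: validate at the boundary and delegate, leaving the legacy
# code untouched. Good enough to quiet the barking dogs, not a real fix.
def shimmed_order_lookup(record_id: str) -> str:
    if not record_id.isalnum():
        raise ValueError("rejected: record id must be alphanumeric")
    return legacy_order_lookup(record_id)
```

The flaw is still there; the wrapper only narrows the paths that reach it, which is exactly why security debt accumulates instead of shrinking.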

The write up contains a number of statistics, but it makes one major point:

Security debt is mounting.

Like a young worker who lives by moving credit card debt from vendor to vendor, getting out of the debt hole may be almost impossible. But, hey, it is that individual’s responsibility, not the system’s. Just be responsible. That is easy to say, and it strikes me as somewhat hollow.

Stephen E Arnold, February 15, 2024

Amazon: The Online Bookstore Has a Wet Basement and Termites

February 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a less-than-positive discussion of my favorite online bookstore Amazon. The analysis appears in the “real” news publication New York Magazine. The essay is a combo: Some news, some commentary, some management suggestions.


Two dinobabies are thinking about the good old days at Amazon. Thanks, MSFT Copilot. Your indigestion on February 9, 2024, appears to have worked itself out. How’s that security coming along? Heh heh heh.

In my opinion, the news hook for “The Junkification of Amazon: Why Does It Feel Like the Company Is Making Itself Worse?” is that Amazon needs to generate revenue, profits, and thrill pulses for stakeholders. I understand this idea. But there is a substantive point tucked into the write up. Here it is:

The view of Amazon from China is worth considering everywhere. Amazon lets Chinese manufacturers and merchants sell directly to customers overseas and provides an infrastructure for Prime shipping, which is rare and enormously valuable. It also has unilateral power to change its policies or fees and to revoke access to these markets in an instant

Amazon has found Chinese products a useful source of revenue. What I think is important is that Temu is an outfit focused on chopping away at Amazon’s vines around the throats of its buyers and sellers. My hunch is that Amazon is not able to regain the trust buyers and sellers once had in the company. The article focuses on “junkification.” I think there is a simpler explanation; to wit:

Amazon has fallen victim to decision craziness. Let me offer a few suggestions.

First, consider the Kindle. A person who reads licenses an ebook for a Kindle. The Kindle software displays:

  • Advertisements which are intended to spark another purchase
  • An interface which does not provide access to the specific ebooks stored on the device
  • A baffling collection of buttons, options, and features related to bookmarks and passages a reader finds interesting. However, the tools are non-functional when someone like me reads content like the Complete Works of William James or keeps a copy of the ever-popular Harvard “shelf of books” on a Kindle.

For me, the Kindle is useless, so I have switched to reading ebooks on my Apple iPad. At least, I can figure out what’s on the device, what’s available from the Apple store, and where the book I am currently reading is located. However, Amazon has not been thinking about how to make the really cheap Kindle more useful to people who still read books.

A second example is the wild and crazy collection of Amazon.com features. I attempted to purchase a pair of grey tactical pants. I found the fabric I wanted. I skipped the weird pop ups. I ignored the videos. And the reviews? Sorry. Sales spam. I located the size I needed. I ordered. The product would arrive two days after I ordered. Here’s what happened:

  • The pants were marked 32 waist, 32 inseam, but the reality was a 28-inch waist and a 28-inch inseam. The fix? I ordered the pants directly from the US manufacturer and donated the Amazon pair to Goodwill.
  • Returns at Amazon are now a major hassle at least in Prospect, Kentucky.
  • The order did not come in two days as promised. The teeny weensy pants came in five days. The norm? Incorrect delivery dates. Perfect for porch pirates, right?

A third example is one I have mentioned in this blog and in my lectures about online fraud. I ordered a CPU. Amazon shipped me a pair of red panties. Nope, neither my style nor a CPU. About 90 days after the rather sporty delivery, emails, and an article in this blog, Amazon refunded my $550. The company did not want me to return the red panties. I have them hanging on my server room’s Movin’ Cool air conditioner.

The New York Magazine article does not provide much about what’s gone wrong at Amazon. I think my examples make clear these management issues:

  1. Decisions are not customer centric. Money is more important than serving the customer, which is a belabored point in numerous Jeff Bezos letters before he morphed into a Miami social magnet.
  2. The staff at Amazon have no clue about making changes that ensure a positive experience for buyers or sellers. Amazon makes decisions to meet goals, check off an item on a to-do list, or expend the minimum amount of mental energy rather than provide a foundation for better decisions for buyers and sellers.
  3. Amazon’s management is unable to prevent decision rot in several, quite different businesses. The AWS service has Byzantine pricing and is struggling to remain competitive in the midst of AI craziness. The logistics business cannot meet delivery targets displayed to a customer when he or she purchases a product. The hardware business is making customers more annoyed than at any previous time. Don’t believe me? Just ask a Ring customer about the price increase or an Amazon Prime customer about advertising in Amazon videos. And Kindle users? It is obvious no one at Amazon pays much attention to Kindle users so why start now? The store front functions are from Bizarro World. I have had to write down on notecards where to find my credit card “points,” how to navigate directly to listings for used music CDs, where my licensed Amazon eBooks reside and once there what the sort options actually do, and what I need to do when a previously purchased product displays lawn mowers, not men’s white T shirts.

Net net: I appreciate the Doctorow-esque word “junkification.” That is close to what Amazon is doing: Converting products and services into junk. Does Amazon’s basement have a leak? Are those termites up there?

Stephen E Arnold, February 15, 2024

Developers, AI Will Not Take Your Jobs… Yet

February 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It seems programmers are safe from an imminent AI jobs takeover. The competent ones, anyway. LeadDev reports, “Researchers Say Generative AI Isn’t Replacing Devs Any Time Soon.” Generative AI tools have begun to lend developers a helping hand, but nearly half of developers are concerned they might lose their jobs to their algorithmic assistants.


Another MSFT Copilot completely original Bing thing. Good enough but that fellow sure looks familiar.

However, a recent study by researchers from Princeton University and the University of Chicago suggests they have nothing to worry about: AI systems are far from good enough at programming tasks to replace humans. Writer Chris Stokel-Walker tells us the researchers:

“… developed an evaluation framework that drew nearly 2,300 common software engineering problems from real GitHub issues – typically a bug report or feature request – and corresponding pull requests across 12 popular Python repositories to test the performance of various large language models (LLMs). Researchers provided the LLMs with both the issue and the repo code, and tasked the model with producing a workable fix, which was tested after to ensure it was correct. But only 4% of the time did the LLM generate a solution that worked.”

Researcher Carlos Jimenez notes these problems are very different from those LLMs are usually trained on. Specifically, the article states:

“The SWE-bench evaluation framework tested the model’s ability to understand and coordinate changes across multiple functions, classes, and files simultaneously. It required the models to interact with various execution environments, process context, and perform complex reasoning. These tasks go far beyond the simple prompts engineers have found success using to date, such as translating a line of code from one language to another. In short: it more accurately represented the kind of complex work that engineers have to do in their day-to-day jobs.”
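The pass/fail scoring the article describes can be sketched in miniature. The toy harness below is my own illustration, not the SWE-bench code: each “issue” pairs a model-proposed fix with test cases, and a fix counts as resolved only if every test passes.

```python
# Toy stand-in for the SWE-bench idea: score model "fixes" by running tests.
def resolves(fix, cases):
    """A candidate fix resolves the issue only if all test cases pass."""
    return all(fix(inp) == expected for inp, expected in cases)

# Four hypothetical issues; each one wants a function that adds one.
cases = [(1, 2), (5, 6)]
candidate_fixes = [
    lambda x: x + 1,   # correct
    lambda x: x * 2,   # looks right on the first case, fails the second
    lambda x: x - 1,   # wrong
    lambda x: 2,       # wrong
]

resolved = sum(resolves(fix, cases) for fix in candidate_fixes)
rate = resolved / len(candidate_fixes)
print(f"resolution rate: {rate:.0%}")  # prints "resolution rate: 25%"
```

The real benchmark applies the model’s patch to an actual repository and runs the project’s own test suite; the 4% figure is the fraction of issues where that end-to-end check passed.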

Will AI someday be able to perform that sort of work? Perhaps, but the researchers consider it more likely that AI will never code independently. Instead, we will continue to need human developers to oversee algorithms’ work. The tools will, however, continue to make programmers’ jobs easier. If Jimenez and company are correct, developers everywhere can breathe a sigh of relief.

Cynthia Murrell, February 15, 2024

Topicfinder and Its List of Free PR Sites

February 14, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I noted “40+ Free Sites to Post a Company’s Press Release (Updated).” The “news” is that the list has been updated. What makes this list interesting to penny-pinching marketers is that the sites are “free.” However, it is a good idea to read about each site’s options and terms of service.


Free can be a powerful magnet. Thanks Google Bard or Gemini or AI Test Kitchen whatever.

The listing is broken into four categories:

  1. The free press release submission list. The sites listed have registration and review processes for obvious reasons; namely, to screen out promotions for illegal products and services and other content which can spark litigation or retribution. A short annotation accompanies each item.
  2. A list of “niche” free press release sites. The idea is that some free services want a certain type of content; for example, a technical slant or tourist content.
  3. A list of sites which now charge for press release distribution.
  4. A list of dead press release distribution sites.

Is the list comprehensive? No. Plus, release aggregation sites like Newswise are not included.

Several suggestions:

  1. The lists do not include the sometimes “interesting” outfits operating on the margins of the marketing world. One example we researched was the outfit doing business as the icrowdnewswire.
  2. For-fee services are useful because a number of these firms have “relationships” with major search engines so that placement is allegedly “guaranteed.” Examples include PRUnderground, Benzinga, and others.
  3. The press release service may not offer a “forever archive”; that is, the press release content is disappeared to either save money or because old content is deemed to have zero click value to the distribution shop.

If you want to give “free” press releases a whirl, Topicfinder’s listing may be a useful starting point. OSINT experts may find some content gems pushed out from these services. Adding these to a watch list may be useful.

Keep in mind that once one registers, a bit of AI orchestration and some ChatGPT-type magic can create a news release blaster. Posting releases one-by-one is very yesterday.

Stephen E Arnold, February 14, 2024

Is AI Another VisiCalc Moment?

February 14, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The easy-to-spot orange newspaper ran a quite interesting “essay” called “What the Birth of the Spreadsheet Can Teach Us about Generative AI.” Let me cut to the point when the fox is killed. AI is likely to be a job creator. AI has arrived at “the right time.” The benefits of smart software are obvious to a growing number of people. An entrepreneur will figure out a way to sell an AI gizmo that is easy to use, fast, and good enough.

In general, I agree. There is one point that the estimable orange newspaper chose not to include. The VisiCalc innovation converted old-fashioned ledger paper into software which could eliminate manual grunt work to some degree. The poster child of the next technology boom seems tailor-made to facilitate surveillance, weapons, and development of novel bio-agents.


AI is going to surprise some people more than others. Thanks, MSFT Copilot Bing thing. Not good but I gave up with the prompts to get a cartoon because you want to do illustrations. Sigh.

I know that spreadsheets are used by defense contractors, but the link between a spreadsheet and an AI-powered drone equipped with octanitrocubane variants is less direct. Sure, spreadsheets arrived in numerous use cases, some obvious, some not. But the capabilities for enabling a range of weapons systems strike me as far more obvious.

The Financial Times’s essay states:

Looking at the way spreadsheets are used today certainly suggests a warning. They are endlessly misused by people who are not accountants and are not using the careful error-checking protocols built into accountancy for centuries. Famous economists using Excel simply failed to select the right cells for analysis. An investment bank used the wrong formula in a risk calculation, accidentally doubling the level of allowable risk-taking. Biologists have been typing the names of genes, only to have Excel autocorrect those names into dates. When a tool is ubiquitous, and convenient, we kludge our way through without really understanding what the tool is doing or why. And that, as a parallel for generative AI, is alarmingly on the nose.
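The gene-name mishap in that passage is easy to reproduce in miniature. The sketch below is illustrative, not Excel’s actual parsing logic: a cell that looks like “month letters plus digits” gets coerced into a date, which is how gene symbols such as MARCH1 and SEPT2 were silently corrupted.

```python
import datetime
import re

MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "APR": 4, "MAY": 5, "JUN": 6,
          "JUL": 7, "AUG": 8, "SEP": 9, "OCT": 10, "NOV": 11, "DEC": 12}

def spreadsheet_coerce(cell: str):
    """Mimic the autocorrect: 'month-like letters + digits' becomes a date."""
    match = re.fullmatch(r"([A-Za-z]+)(\d{1,2})", cell)
    prefix = match.group(1).upper()[:3] if match else ""
    if match and prefix in MONTHS:
        return datetime.date(2024, MONTHS[prefix], int(match.group(2)))
    return cell

print(spreadsheet_coerce("MARCH1"))  # 2024-03-01: the gene symbol is gone
print(spreadsheet_coerce("BRCA1"))   # BRCA1: no month prefix, so it survives
```

The defensive habit is the same one accountants use: import such columns as text and check what the tool did before trusting it.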

Smart software, however, is not a new thing. One can participate in quasi-religious disputes about whether AI is 20, 30, 40, or more years old. What’s interesting to me is that after chugging along like a mule cart on the Information Superhighway, AI is everywhere. Old-school British newspapers liken it to the spreadsheet. Entrepreneurs spend big bucks on Product Hunt roll outs. Owners of mobile devices can locate “pizza near me” without having to type, speak, or express an interest in a cardiologist’s favorite snack.

AI strikes me as a different breed of technology cat. Here are my reasons:

  1. Serious AI takes serious money.
  2. Big AI is going to be a cloud-linked service which invites consolidation just like those hundreds of US railroads became the glorious two player system we have today: One for freight and one for passengers who love trains more than flying or driving.
  3. AI systems are going to have to find a way to survive and thrive without becoming victims of content inbreeding and bizarre outputs fueled by synthetic data. VisiCalc spawned spreadsheet fever in humans from the outset. The difference is that AI does its work largely without humanoids.

Net net: The spreadsheet looks like a convenient metaphor. But metaphors are not the reality. Reality can surprise in interesting ways.

Stephen E Arnold, February 14, 2024

It Works for SEO and Narcotics… and Academics

February 14, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Academic research papers that have been cited often are probably credible, right? These days, not so much. Science reports, “Citation Cartels Help Some Mathematicians – and their Universities – Climb the Rankings.” Referring to an analysis by University of Vigo’s Domingo Docampo, writer Michele Catanzaro tells us:

“Cliques of mathematicians at institutions in China, Saudi Arabia, and elsewhere have been artificially boosting their colleagues’ citation counts by churning out low-quality papers that repeatedly reference their work, according to an unpublished analysis seen by Science. As a result, their universities—some of which do not appear to have math departments—now produce a greater number of highly cited math papers each year than schools with a strong track record in the field, such as Stanford and Princeton universities. These so-called ‘citation cartels’ appear to be trying to improve their universities’ rankings, according to experts in publication practices. ‘The stakes are high—movements in the rankings can cost or make universities tens of millions of dollars,’ says Cameron Neylon, a professor of research communication at Curtin University. ‘It is inevitable that people will bend and break the rules to improve their standing.’ In response to such practices, the publishing analytics company Clarivate has excluded the entire field of math from the most recent edition of its influential list of authors of highly cited papers, released in November 2023.”


Thanks MSFT Copilot Bing thing. You are mostly working today. Actually well enough for good enough art.

Researchers say this manipulation occurs across disciplines, but the relatively low number of published math papers makes it more obvious in that field. When Docampo noticed the trend, the mathematician analyzed 15 years’ worth of Clarivate’s data to determine which universities were publishing highly cited math papers and who was citing them. Back in 2008–2010, legitimate heavy hitters like UCLA and Princeton were at the top of the cited list. But in the last few years those were surpassed by institutions not exactly known for their mathematics prowess. Many were based in China, Saudi Arabia, and Egypt. And, yes, those citations were coming from inside the writers’ own schools. Sneaky. But not sneaky enough.

There may again come a time when citations can be used as a metric for reliability. Docampo is working on a system to weigh citations according to the quality of the citing journals and institutions. Until then, everyone should take citation counts with a grain of salt.
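Docampo’s weighting idea can be sketched, with heavy hedging since the actual method is unpublished. Assume each citation carries the citing journal’s quality score plus the citing institution, and that in-house citations are discounted; the cartel’s raw count then mostly evaporates.

```python
# Hypothetical citation records: (citing_institution, journal_quality 0..1).
def weighted_citations(citations, home_institution, in_house_discount=0.1):
    """Weight each citation by journal quality; discount in-house ones."""
    total = 0.0
    for institution, quality in citations:
        weight = in_house_discount if institution == home_institution else 1.0
        total += weight * quality
    return total

cartel = [("Univ A", 0.2)] * 50      # 50 low-quality in-house citations
outside = [("Univ B", 0.9)] * 5      # 5 citations from strong outside venues

raw_count = len(cartel + outside)                        # 55 by raw count
weighted = weighted_citations(cartel + outside, "Univ A")
print(raw_count, round(weighted, 1))  # 55 vs 5.5: the boost mostly vanishes
```

The institution names, scores, and discount factor are all invented for illustration; the point is only that weighting by citing-venue quality deflates manufactured counts.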

Cynthia Murrell, February 14, 2024

A Xoogler Explains AI, News, Inevitability, and Real Business Life

February 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an essay providing a tiny bit of evidence that one can take the Googler out of the Google, but that Xoogler still retains some Googley DNA. The item appeared in the Bezos bulldozer’s estimable publication with the title “The Real Wolf Menacing the News Business? AI.” Absolutely. Obviously. Who does not understand that?


A high-technology sophist explains the facts of life to a group of listeners who are skeptical about artificial intelligence. The illustration was generated after three tries by Google’s own smart software. I love the miniature horse and the less-than-flattering representation of a sales professional. That individual looks like one who would be more comfortable eating the listeners than convincing them about AI’s value.

The essay contains a number of interesting points. I want to highlight three and then, as I quite enjoy doing, I will offer some observations.

The author is a Xoogler who served from 2017 to 2023 as the senior director of news ecosystem products. I quite like the idea of a “news ecosystem.” But ecosystems, as anyone who follows the impact of man on environments knows, can be destroyed or pushed to the edge of catastrophe. In the aftermath of devastation coming from indifferent decision makers, greed-fueled entrepreneurs, or rhinoceros poachers, landscapes are often transformed.

First, the essay writer argues:

The news publishing industry has always reviled new technology, whether it was radio or television, the internet or, now, generative artificial intelligence.

I love the word “revile.” It suggests that ignorant individuals are unable to grasp the value of certain technologies. I also like the very clever use of the word “always.” Categorical affirmatives make the world of zeros and ones so delightfully absolute. We’re off to a good start, I think.

Second, we have a remarkable argument which invokes another zero and one type of thinking. Consider this passage:

The publishers’ complaints were premised on the idea that web platforms such as Google and Facebook were stealing from them by posting — or even allowing publishers to post — headlines and blurbs linking to their stories. This was always a silly complaint because of a universal truism of the internet: Everybody wants traffic!

I love those universal truisms. I think some at Google honestly believe that their insights, perceptions, and beliefs are the One True Path Forward. Confidence is good, but the implication that a universal truism exists strikes me as evidence of a psychological and intellectual aberration. Consider this truism offered by my uneducated great grandmother:

Always get a second opinion.

My great grandmother used the logically troublesome word “always.” The idea seems reasonable, but the action may not always be possible. Does Google get second opinions when it decides to kill one of its services, modify algorithms in its ad brokering system, or reorganize its contentious smart software units? “Always” opens the door to many issues.

Publishers (I assume “all” publishers) want traffic. May I demonstrate the frailty of the Xoogler’s argument? I publish a blog called Beyond Search. I have done this since 2008. I do not care if I get traffic or not. My goal was and remains to present commentary about the antics of high-technology companies and related subjects. Why do I do this? First, I want to make sure that my views about such topics as Google search exist. Second, I have set up my estate so the content will remain online long after I am gone. I am a publisher, and I don’t want traffic, or at least the type of traffic that Google provides. One exception causes an argument like the Xoogler’s to be shown as false, even if it is self-serving.

Third, the essay points its self-righteous finger at “regulators.” The essay suggests that elected officials pursued “illegitimate complaints” from publishers. I noted this passage:

Prior to these laws, no one ever asked permission to link to a website or paid to do so. Quite the contrary, if anyone got paid, it was the party doing the linking. Why? Because everybody wants traffic! After all, this is why advertising businesses — publishers and platforms alike — can exist in the first place. They offer distribution to advertisers, and the advertisers pay them because distribution is valuable and seldom free.

Repetition is okay, but I am able to recall one of the key arguments in this Xoogler’s write up: “Everybody wants traffic.” Since it is false, I am not sure the essay’s argumentative trajectory is on the track of logic.

Now we come to the guts of the essay: Artificial intelligence. What’s interesting is that AI magnetically pulls regulators back to the casino. Smart software companies face techno-feudalists in a high-stakes game. I noted this passage about anchoring statements via verification and just training algorithms:

The courts might or might not find this distinction between training and grounding compelling. If they don’t, Congress must step in. By legislating copyright protection for content used by AI for grounding purposes, Congress has an opportunity to create a copyright framework that achieves many competing social goals. It would permit continued innovation in artificial intelligence via the training and testing of LLMs; it would require licensing of content that AI applications use to verify their statements or look up new facts; and those licensing payments would financially sustain and incentivize the news media’s most important work — the discovery and verification of new information — rather than forcing the tech industry to make blanket payments for rewrites of what is already long known.

Who owns the casino? At this time, I would suggest that lobbyists and certain non-governmental entities exert considerable influence over some elected and appointed officials. Furthermore, some AI firms are moving as quickly as reasonably possible to convert interest in AI into revenue streams with moats. The idea is that if regulations curtail AI companies, consumers would not be well served. No 20-something wants to read a newspaper. That individual wants convenience and, of course, advertising.

Now several observations:

  1. The Xoogler author believes in AI going fast. The technology serves users / customers what they want. The downsides are bleats and shrieks from an outmoded sector; that is, those engaged in news.
  2. The logic of the technologist is not the logic of a person who prefers nuances. The broad statements are false to me, for example. But to the Xoogler, these are self-evident truths. Get with our program or get left to sleep on cardboard in the street.
  3. The schism smart software creates is palpable. On one hand, there are those who “get it.” On the other hand, there are those who fight a meaningless battle with the inevitable. There’s only one problem: Technology is not delivering better, faster, or cheaper social fabrics. Technology seems to have some downsides. Just ask a journalist trying to survive on YouTube earnings.

Net net: The attitude of the Xoogler suggests that one cannot shake the sense of being right, entitlement, and logic associated with a Googler even after leaving the firm. The essay makes me uncomfortable for two reasons: [1] I think the author means exactly what is expressed in the essay. News is going to be different. Get with the program or lose big time. And [2] the attitude is one which I find destructive because technology is assumed to “do good.” I am not too sure about that because the benefits of AI are not known and neither are AI’s downsides. Plus, there’s the “everybody wants traffic.” Monopolistic vendors of online ads want me to believe that obvious statement is ground truth. Sorry. I don’t.

Stephen E Arnold, February 13, 2024

AI: Big Ideas and Bigger Challenges for the Next Quarter Century. Maybe, Maybe Not

February 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting ArXiv.org paper with a good title: “Ten Hard Problems in Artificial Intelligence We Must Get Right.” The topic is one which will interest some policy makers, a number of AI researchers, and the “experts” in machine learning, artificial intelligence, and smart software.

The structure of the paper is, in my opinion, a three-legged stool analysis designed to support the weight of AI optimists. The first part of the paper is a compressed historical review of the AI journey. Diagrams, tables, and charts capture the direction in which AI “deep learning” has traveled. I am no expert in what has become the next big thing, but the surprising point in the historical review is that 2010 is pegged as the start of the run-up to the 2016 time point called “the large scale era.” That label is interesting for two reasons. First, I recall that some intelware vendors were in the AI game before 2010. And, second, the use of the phrase “large scale” defines a reality in which small outfits are unlikely to succeed without massive amounts of money.

The second leg of the stool is the identification of the “hard problems” and a discussion of each. Research data and illustrations bring each problem to the reader’s attention. I don’t want to get snagged in the plagiarism swamp which has captured many academics, wives of billionaires, and a few journalists. My approach will be to boil down the 10 problems to a short phrase and a reminder to you, gentle reader, that you should read the paper yourself. Here is my version of the 10 “hard problems” which the authors seem to suggest will be or must be solved in 25 years:

  1. Humans will have extended AI by 2050
  2. Humans will have solved problems associated with AI safety, capability, and output accuracy
  3. AI systems will be safe, controlled, and aligned by 2050
  4. AI will make contributions in many fields; for example, mathematics by 2050
  5. AI’s economic impact will be managed effectively by 2050
  6. Use of AI will be globalized by 2050
  7. AI will be used in a responsible way by 2050
  8. Risks associated with AI will be managed effectively by 2050
  9. Humans will have adapted their institutions to AI by 2050
  10. Humans will have addressed what it means to be “human” by 2050

Many years ago I worked for a blue-chip consulting firm. I participated in a number of big-idea projects. These ranged from technology, R&D investment, and new product development to the global economy. In our for-fee reports we did include a look at what we called the “horizon.” The firm had its own typographical signature for this portion of a report. I recall learning this in the firm’s “charm school” (a special training program to make sure new hires knew the style, approach, and ground rules for remaining employed at that blue-chip firm). We kept the horizon tight; that is, talking about the future was typically in the six to 12 month range. Nosing out 25 years was a walk into a minefield. My boss, as I recall, told me, “We don’t do science fiction.”

image

The smart robot is informing the philosopher that he is free to find his future elsewhere. The date of the image is 2025, right before the new year holiday. Thanks, MidJourney. Good enough.

The third leg of the stool is the academic impedimenta. To be specific, the paper is 90 pages in length, of which 30 present the argument. The remaining 60 pages present:

  • Traditional footnotes, about 35 pages containing 607 citations
  • An “Electronic Supplement” presenting eight pages of annexes with text, charts, and graphs
  • Footnotes to the “Electronic Supplement” requiring another 10 pages for the additional 174 footnotes.

I want to offer several observations, and I do not want them to be less than constructive or in any way like the letter one of my professors received. He was treated harshly in Letters to the Editor for an article he published about Chaucer. He described that fateful letter as “mean spirited.”

  1. The paper makes clear that mankind has some work to do in the next 25 years. The “problems” the paper presents are difficult ones because they touch upon the fabric of social existence. Consider the application of AI to war. I think this aspect of AI may be one to warrant a bullet on AI’s hit parade.
  2. Humans have to resolve issues of automated systems consuming verifiable information, synthetic data, and purpose-built disinformation so that smart software does not do things at speed and behind the scenes. Do those working to resolve the 10 challenges have an ethical compass, and if so, what does “ethics” mean in the context of at-scale AI?
  3. Social institutions are under stress. A number of organizations and nation-states operate as dictatorships. One Central American country has a rock star dictator, but what about the rock star dictators running techno-feudal companies in the US? What governance structures will be crafted by 2050 to shape today’s technology juggernauts?

To sum up, I think the authors have tackled a difficult problem. I commend their effort. My thought is that any message of optimism about AI will be hard pressed to point to one of the 10 challenges and say, “We have this covered.” I liked the write up. I think college students tasked with writing about the social implications of AI will find the paper useful. It provides much of the research a fresh young mind requires to write a paper, possibly a thesis. For me, the paper is a reminder of the disconnect between applied technology and the appallingly inefficient, convenience-embracing humans who are ensnared in the smart software.

I am a dinobaby, and let me tell you, “I am glad I am old.” With AI struggling with go-fast and regulators waffling about go-slow, humankind has quite a bit of social system tinkering to do by 2050 if the authors of the paper have analyzed AI correctly. Yep, I am delighted I am old, really old.

Stephen E Arnold, February 13, 2024

Google Gems: February 5 to 9, 2024

February 13, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Google tallied another bumper week of innovations, news, and management marvels. Let’s take a look.

WE HAVE OUR ACT TOGETHER

The principal story concerns Google’s “answer” to the numerous competitors for smart software. The Gemini subscription service has arrived. Fourteen months after Microsoft caught Googzilla napping near the Foosball table, the quantum supremacy outfit has responded. Google PR received accolades in the Wired article explaining Google’s monumental achievement: A subscription service like OpenAI’s and Microsoft’s.

And in a twist of logic, Google has allegedly alerted users of Gemini (the answer to MSFT and ChatGPT) not to provide confidential or personal data to a Gemini service. With logging, Google’s tracking of user behaviors, and users’ general indifference to privacy issues associated with any Web service, why is a special warning needed? “Google Warning: Do Not Divulge Confidential Info or Personal Data When Using Gemini” reports:

Users can also turn off Gemini Apps Activity to stop the collection of conversations but even when it is disabled, Gemini conversations continue to be saved for up to 72 hours to "maintain the safety and security of Gemini apps and improve Gemini apps."

Toss in Google human review and what do you get? A Googley service with a warning.

image

Google inspects its gems. Thanks MSFT Copilot. Good enough.

Second, Google has allegedly been taking some liberties with data captured from Danish schools. (Imagine that!) The students use Chromebooks, and these devices seem to be adept at capturing data no matter what the Danish IT administrators do. For reference, see the item about confidential and personal data above, please. “Denmark Orders Schools to Stop Sending Student Data to Google” reports:

Also, given that restricting sensitive data processing on Google’s end will be hard, if not impossible, for municipalities to assure, there may be no practical way to adhere to the new policies without blocking the use of Google Chromebooks and/or Google Workspace.

Yes, the act is indeed together. Words do not change data collection, it seems.

Third, Google published a spyware report. You can download the document from this link. In addition to naming the names of vendors with specialized tools, Google does little to explain how Android-based devices are protected from these firms’ software. My thought is that since Google knows what these companies are doing, it has been making its users and customers more secure. Perhaps Google’s management thinks that talking about spyware is the same as protecting users and customers. The identified vendors are probably delighted to receive free publicity. To Google’s credit, it did test a process for protecting users from financial fraud. The report is highlighted with the news about more Chrome security problems.

Google management is the best.

PRODUCT GEMS

I don’t want to overlook Google’s ability to make meaningful innovations.

Out of the blocks, I want to mention Google’s announcement that it will create an app for Apple’s $3,500 smart goggles. Google Glass apparently provided some inspiration to the savvy iTunes people.

A second innovation is Google’s ability to deliver higher quality to YouTube streaming video. The service requires paying more money to the Google, but that’s part of the company’s plan to grow despite increasing competition and cost control challenges. Will Google’s method work if the streamer has lousy bandwidth? Sure, sure, Google has confidence in its capabilities despite issues solely within the control of its users and customers.

A third innovation is that Google may offer seven years of updates to Pixel phone users. OnePlus management thinks this is baloney. Seven years is a long time in a Googley world. A quick review of the fate of the Google cache and other products killed by Google reminds one of Google’s concept of commitment. (One rumor is that killing the Google cache extricated Google from paywall bypass services.) The question is, “Will Pinpoint be a Googley way to get information from paywalled content?” What is Pinpoint? The explanation is at a really popular site called Journalist Studio. Everyone knows that.

A fourth item repeats an ever more frequent refrain: Google search is meh. Some, however, are just calling the service broken.

Fifth, Google Maps are getting more features. Google Maps for Android mobiles can now display the weather. One may not be able to locate a destination, but one knows the weather.

Sixth, in a breakthrough of significant proportions, Google has announced a new Pixel variant which folds and sports a redesigned camera island. This is not a bump. It is an island, obviously.

SERVICE PEARLS

Google continues its march to be the cable service for streaming.

First, Google suggested it had more than eight million “subscribers.” Expressed another way, YouTube is fourth among pay television services.

Also, Google has expressed a desire to get more viewer time than it has in the past.

For those who fancy Google-intermediated ads on Pinterest, that day has arrived.

COURT ACTIVITY

Google continues to be of interest to regulatory officials.

First, Google faces an antitrust trial in the US. The matter is related to Google’s approach to digital advertising. After 25 years of the firm’s efforts to diversify its revenue, advertising still accounts for more than 60 percent of the total.

Second, Google paid to settle a class action lawsuit. The matter was a security failure in a now-dead service called Google Plus. How much did the Google pay? Just $350 million, or a month of coffee for thirsty Googlers (estimated, of course).

What will Google do this week? Alas, I cannot predict the future like some savvy bloggers.

Stephen E Arnold, February 13, 2024

Sam AI-Man Puts a Price on AI Domination

February 13, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

AI start-ups may want to amp up their fundraising. Optimism and confidence are often perceived as positive attributes. As a dinobaby, I think in terms of finding a deal at the discount supermarket. Sam AI-Man (actually Sam Altman) thinks big. Forget the $5 million investment in a semi-plausible AI play. “Think a bit bigger” is the catchphrase for OpenAI.

image

Thinking billions? You silly goose. Think trillions. Thanks, MidJourney. Close enough, close enough.

How does seven followed by 12 zeros strike you? A reasonable figure? Well, Mr. AI-Man estimates that is the cost of building the AI-dominating chips, content, and assorted impedimenta needed to win the AI dust-ups in assorted global markets. “OpenAI Chief Sam Altman Is Seeking Up to $7 TRILLION (sic) from Investors Including the UAE for Secretive Project to Reshape the Global Semiconductor Industry” reports:

Altman is reportedly looking to solve some of the biggest challenges faced by the rapidly-expanding AI sector — including a shortage of the expensive computer chips needed to power large-language models like OpenAI’s ChatGPT.

And where does one locate entities with this much money? The news report says:

Altman has met with several potential investors, including SoftBank Chairman Masayoshi Son and Sheikh Tahnoun bin Zayed al Nahyan, the UAE’s head of security.

To put the figure in context, the article says:

It would be a staggering and unprecedented sum in the history of venture capital, greater than the combined current market capitalizations of Apple and Microsoft, and more than the annual GDP of Japan or Germany.

Several observations:

  • The ante for big time AI has gone up
  • The argument for people and content has shifted to chip facilities to fabricate semiconductors
  • The fund-me tour is a newsmaker.

Net net: How about those small search-and-retrieval oriented AI companies? Heck, what about outfits like Amazon, Facebook, and Google?

Stephen E Arnold, February 13, 2024
