Young People Are Getting News from Sources I Do Not Find Helpful. Sigh.

July 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“TikTok Is the Most Popular News Source for 12 to 15-Year-Olds, Says Ofcom” presents some interesting data. First, however, let’s answer the question, “What’s an Ofcom?” It is a UK government agency that regulates communications in the UK. From mobile to mail, Ofcom is there. Like most government entities, it does surveys.

Now what did the Ofcom research discover? Here are three items:

“You mean people used to hold this grimy paper thing and actually look at it to get information?” asks one young person. The other says, “Yes, I think it is called a maga-bean or maga-zeen or maga-been, something like that.” Thanks for this wonderful depiction of bafflement, MidJourney.

  1. In the UK, those 12 to 15 get their news from TikTok.
  2. The second most popular source of news is the Zuckbook’s Instagram.
  3. Those aged 16 to 24 are mired in the past, relying on social media and mobile phones.

Interesting, but I was surprised that a traditional printed newspaper did not offer more information about the impact of this potentially significant trend on newspapers, printed books, and printed magazines.

Assuming the data are correct, as those 12 to 15 age, their behavior patterns may suggest that today’s dark days for traditional media were a bright, sunny afternoon.

Stephen E Arnold, July 28, 2023

Digital Delphis: Predictions More Reliable Than Checking Pigeon Innards, We Think

July 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One of the many talents of today’s AI is apparently a bit of prophecy. Interconnected examines “Computers that Live Two Seconds in the Future.” Blogger Matt Webb pulls together three examples that, to him, represent an evolution in computing.

The AI computer, a digital Delphic oracle, gets some love from its acolytes. One engineer says, “Any idea why the system is hallucinating?” The other engineer replies, “No clue.” MidJourney shows some artistic love to hard-working, trustworthy computer experts.

His first digital soothsayer is Apple’s Vision Pro headset. This device, billed as a “spatial computing platform,” takes melding the real and virtual worlds to the next level. To make interactions as realistic as possible, the headset predicts what a user will do next by reading eye movements and pupil dilation. The Vision Pro even flashes visuals and sounds so fast as to be subliminal and interprets the eyes’ responses. Ingenious, if a tad unsettling.

The next example addresses a very practical problem: WavePredictor from Next Ocean helps with loading and unloading ships by monitoring wave movements and extrapolating the next few minutes. Very helpful for those wishing to avoid cargo sloshing into the sea.

Finally, Webb cites a development that both excites and frightens programmers: GitHub Copilot. Though some coders worry this and similar systems will put them out of a job, others see it more as a way to augment their own brilliance. Webb paints the experience as a thrilling bit of time travel:

“It feels like flying. I skip forwards across real-time when writing with Copilot. Type two lines manually, receive and get the suggestion in spectral text, tab to accept, start typing again… OR: it feels like reaching into the future and choosing what to bring back. It’s perhaps more like the latter description. Because, when you use Copilot, you never simply accept the code it gives you. You write a line or two, then like the Ghost of Christmas Future, Copilot shows you what might happen next – then you respond to that, changing your present action, or grabbing it and editing it. So maybe a better way of conceptualizing the Copilot interface is that I’m simulating possible futures with my prompt then choosing what to actualize. (Which makes me realize that I’d like an interface to show me many possible futures simultaneously – writing code would feel like flying down branching time tunnels.)”

Gnarly dude! But what does all this mean for the future of computing? Even Webb is not certain. Considering operating systems that can track a user’s focus, geographic location, communication networks, and augmented reality environments, he writes:

“The future computing OS contains of the model of the future and so all apps will be able to anticipate possible futures and pick over them, faster than real-time, and so… …? What happens when this functionality is baked into the operating system for all apps to take as a fundamental building block? I don’t even know. I can’t quite figure it out.”

Us either. Stay tuned, dear readers. Oh, let’s assume the wizards get the digital Delphic oracle outputting the correct future. You know, the future that cares about humanoids.

Cynthia Murrell, July 28, 2023

AI and Malware: An Interesting Speed Dating Opportunity?

July 27, 2023

Note: Dinobaby here: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. Services are now ejecting my cute little dinosaur gif. Like my posts related to the Dark Web, the MidJourney art appears to offend someone’s sensibilities in the datasphere. If I were not 78, I might look into these interesting actions. But I am, and I don’t really care.

AI and malware. An odd couple? One of those on my research team explained at lunch yesterday that an enterprising bad actor could use one of the code-savvy generative AI systems and the information in the list of resources compiled by 0xsyr0 and available on GitHub here. The idea is that one could grab one or more of the malware development resources and do some experimenting with an AI system. My team member said the AmsiHook looked interesting as well as Freeze. Is my team member correct? Allegedly next week he will provide an update at our weekly meeting. My question is, “Do the recent assertions about smart software cover this variant of speed dating?”

Stephen E Arnold, July 27, 2023

Netflix Has a Job Opening. One Job Opening to Replace Many Humanoids

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “As Actors Strike for AI Protections, Netflix Lists $900,000 AI Job.” Obviously the headline is about AI, money, and entertainment. Is the job “real”? Like so much of the output of big companies, it is difficult to determine how much is clickbait, how much is surfing on “real” journalists’ thirst for juicy info, and how much is trolling. Yep, trolling. Netflix drives a story about AI’s coming to Hollywood.

The write up offers Hollywood verbiage and makes an interesting point:

The [Netflix job] listing points to AI’s uses for content creation: “Artificial Intelligence is powering innovation in all areas of the business,” including by helping them to “create great content.” Netflix’s AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its “Machine Learning Platform” involving AI specialists “across Netflix.”

The machine learning platform, or MLP, is an exercise in cost control, profit maximization, and presaging the future. If smart software can generate new versions of old content, whip up acceptable facsimiles, and eliminate insofar as possible the need for non-elite humans, what is left for those humans is not clear.

The $900,000 may be code for “Smart software can crank out good enough content at lower cost than traditional Hollywood methods.” Even the TikTok and YouTube “stars” face an interesting choice: [a] Figure out how to offload work to smart software or [b] learn to cope with burn out, endless squabbles with gatekeepers about money, and the anxiety of becoming a has-been.

Will humans, even talented ones, be able to cope with the pressure smart software will exert on the production of digital content? Like the junior attorney and cannon fodder for blue chip consulting companies, AI is moving from spitting out high school essays to more impactful outputs.

One example is the integration of smart software into workflows. The jargon about this enabling use of smart software is fluid. The $900,000 job focuses on something that those likely to be affected can understand: A good enough script and facsimile actors and actresses with a mouse click.

But the embedded AI promises to rework the back office processes and the unseen functions of humans just doing their jobs. My view is that there will be $900K per year jobs but far fewer of them than there are regular workers. What is the future for those displaced?

Crafting? Running yard sales? Creating fine art?

Stephen E Arnold, July 27, 2023

Ethics Are in the News — Now a Daily Feature?

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is déjà vu all over again, or it seems like it. I read “Judge Finds Forensic Scientist Henry Lee Liable for Fabricating Evidence in a Murder Case.” Yep, that is the story. Scientist Lee allegedly has a knack for fiction; that is, making up stuff or arranging items in a special way. One of my relatives founded Hartford, Connecticut, in 1635. I am not sure he would have been on board with this make-stuff-up approach to data. (According to our family lore, John Arnold was into beating people with a stick.) Dr. Lee is a big wheel because he worked on the 1995 running-through-airports trial. The cited article includes this interesting sentence:

[Scientist] Lee’s work in several other cases has come under scrutiny…

No one is watching. A noted scientist helps himself to the cookies in the lab’s cookie jar. He is heard mumbling, “Cookies. I love cookies. I am going to eat as many of these suckers as I can because I am alone. And who cares about anyone else in this lab? Not me.” Chomp chomp chomp. Thanks, MidJourney. You depicted an okay scientist but refused to create an image of a great leader whom I identified by proper name. For this I paid money?

Let me mention three ethics incidents which for one reason or another hit my radar:

  1. MIT accepting cash from every young person’s friend Jeffrey Epstein. He allegedly killed himself. He’s off the table.
  2. The Harvard ethics professor who made up data. She’s probably doing consulting work now. I don’t know if she will get back into the classroom. If she does it might be in the Harvard Business School. Those students have a hunger for information about ethics.
  3. The soon-to-be-departed president of Stanford University. He may find a future using ChatGPT or an equivalent to write technical articles and angling for a gig on cable TV.

What do these allegedly true incidents tell us about the moral fiber of some people in positions of influence? I have a few ideas. Now the task is remediation. When John Arnold chopped wood in Hartford, justice involved ostracism, possibly a public shaming, or rough justice played out to the theme from Hang ‘Em High.

Harvard, MIT, and Stanford: Aren’t universities supposed to set an example for impressionable young minds? What are the students learning? Anything goes? Prevaricate? Cut corners? Grub money?

Imagine sweatshirts with the college logo and these words on the front and back of the garment: Winner. Some at Amazon, Apple, Facebook, Google, Microsoft, and OpenAI might wear them to the next off-site. I would wager that one turns up in the Rayburn House Office Building wellness room.

Stephen E Arnold, July 27, 2023

AI Leaders and the Art of Misdirection

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Lately, leaders at tech companies seem to have slipped into a sci-fi movie.

“Trust me. AI is really good. I have been working to create a technology which will help the world. I want to make customers, and you, Senator, trust us. I and other AI executives want to save whales. We want the snail darter to thrive. We want the homeless to have suitable housing. AI will deliver this and more plus power and big bucks to us!” asserts the sincere AI wizard with a PhD and an MBA.

Rein in our algorithmic monster immediately before it takes over the world and destroys us all! But AI Snake Oil asks, “Is Avoiding Extinction from AI Really an Urgent Priority?” Or is it a red herring? Writers Seth Lazar, Jeremy Howard, and Arvind Narayanan consider:

“Start with the focus on risks from AI. This is an ambiguous phrase, but it implies an autonomous rogue agent. What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a ‘rogue human’ with AI’s assistance. Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.”

Excellent point. But what, specifically, are the rich and powerful trying to distract us from here? Existing AI systems are already causing harm, and have been for some time. Without mitigation, this problem will only worsen. There are actions that can be taken, but who can focus on that when our very existence is (supposedly) at stake? Probably not our legislators.

Cynthia Murrell, July 27, 2023

Will Smart Software Take Customer Service Jobs? Do Grocery Stores Raise Prices? Well, Yeah, But

July 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have suggested that smart software will eliminate some jobs. Who will be doing the replacements? Workers one finds on Fiverr.com? Interns who will pay to learn something which may be more useful than a degree in art history? RIF’ed former employees who are desperate for cash and will work for a fraction of their original salary?

“Believe it or not, I am here to help you. However, I strongly suggest you learn more about the technology used to create software robots and helpers like me. I also think you have beautiful eyes. Mine are just blue LEDs, but the Terminator finds them quite attractive,” says the robot who is learning from her human sidekick. Thanks, MidJourney, you have the robot human art nailed.

The fact is that smart software will perform many tasks once handled by humans. Don’t believe me? Visit a local body shop. Then take a tour of the Toyota factory not too distant from Tokyo’s airport. See the difference? The local body shop is swarming with folks who do stuff with their hands, spray guns, and machines which have been around for decades. The Toyota factory is not like that.

Machines — hardware, software, or combos — do not take breaks. They do not require vacations. They do not complain about hard work and long days. They, in fact, are lousy machines.

Therefore, the New York Times’s article “Training My Replacement: Inside a Call Center Worker’s Battle with AI”  provides a human interest glimpse of the terrors of a humanoid who sees the writing on the wall. My hunch is that the New York Times’s “real news” team will do more stories like this.

However, it would be helpful for write ups like this one to include information such as a reference or a subtle nod to “There Are 4 Reasons Why Jobs Are Disappearing — But AI Isn’t One of Them.” What are these reasons? Here’s a snapshot:

  • Poor economic growth
  • Higher costs
  • Supply chain issues (real, convenient excuse, or imaginary)
  • That old chestnut: Covid. Boo.

Do I buy the report? I think identification of other factors is a useful exercise. In the short term, many organizations are experimenting with smart software. Few are blessed with senior executives who trust technology when those creating the technology are not exactly sure what’s going on with their digital whiz kids.

The Gray Lady’s “real news” teams should be nervous. The wonderful, trusted, reliable Google is allegedly showing how a human can use Google AI to help humans with creating news.

Even art history majors should be suspicious because once a leader in carpetland hears about the savings generated by deleting humanoids and their costs, those bean counters will allow an MBA to install software. Remember, please, that the mantra of modern management is money and good enough.

Stephen E Arnold, July 26, 2023

Hedge Funds and AI: Lovers at First Sight

July 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One promise of AI is that it will eliminate tedious tasks (and the jobs that go with them). That promise is beginning to be fulfilled in the investment arena, we learn from the piece “Hedge Funds Are Deploying ChatGPT to Handle All the Grunt Work,” shared by Yahoo Finance. What could go wrong?

Two youthful hedge fund managers are so pleased with their AI-infused hedge fund tactics that they have jumped into a swimming pool which is starting to fill with money. Thanks, MidJourney. You have nailed the happy bankers and their enjoyment of money raining down.

Bloomberg’s Justina Lee and Saijel Kishan write:

“AI on Wall Street is a broad church that includes everything from machine-learning algorithms used to compute credit risks to natural language processing tools that scan the news for trading. Generative AI, the latest buzzword exemplified by OpenAI’s chatbot, can follow instructions and create new text, images or other content after being trained on massive amounts of inputs. The idea is that if the machine reads enough finance, it could plausibly price an option, build a portfolio or parse a corporate news headline.”

Parse the headlines for investment direction. Interesting. We also learn:

“Fed researchers found [ChatGPT] beats existing models such as Google’s BERT in classifying sentences in the central bank’s statements as dovish or hawkish. A paper from the University of Chicago showed ChatGPT can distill bloated corporate disclosures into their essence in a way that explains the subsequent stock reaction. Academics have also suggested it can come up with research ideas, design studies and possibly even decide what to invest in.”

Sounds good in theory, but there is just one small problem (several, really, but let’s focus on just the one): These algorithms make mistakes. Often. (Scroll down in this GitHub list for the ChatGPT examples.) It may be wise to limit one’s investments to firms patient enough to wait for AI to become more reliable.

Cynthia Murrell, July 26, 2023

Google, You Are Constantly Surprising: Planned Obsolescence, Allegations of IP Impropriety, and Gardening Leave

July 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I find Google to be an interesting company, possibly more intriguing than the tweeter X outfit. As I zipped through my newsfeed this morning while dutifully riding the exercise machine, I noticed three stories. Each provides a glimpse of the excitement that Google engenders. Let me share these items with you because I am not sure each will get the boost from the tweeter X outfit.

Google is in the news and causing consternation in the mind of this MidJourney creation. At least one Google advocate finds the information shocking. Imagine: planned obsolescence, alleged theft of intellectual property, and sending a Googler with a 13-year work history home to “garden.”

The first story comes from Oakland, California. California is a bastion of good living and clear thinking. “Thousands of Chromebooks Are ‘Expiring,’ Forcing Schools to Toss Them Out” explains that Google has designed obsolescence into Chromebooks used in schools. Why, one may ask? Here’s the answer:

Google told OUSD [Oakland Unified School District] the baked-in death dates are necessary for security and compatibility purposes. As Google continues to iterate on its Chromebook software, older devices supposedly can’t handle the updates.

Yes, security, compatibility, and the march of Googleware. My take is that green talk is PR. The reality is landfill.

The second story is from the Android Authority online news service. One would expect good news or semi-happy information about my beloved Google. But, alas, there is the story “Google Ordered to Pay $339M for Stealing the Very Idea of Chromecast.” The operative word is “stealing.” Wow. The Google? The write up states:

Google opposed the complaint, arguing that the patents are “hardly foundational and do not cover every method of selecting content on a personal device and watching it on another screen.”

Yep, “hardly,” but stealing. That’s quite an allegation. It raises the question, “Are there any other Google actions which have suggested similar behavior; for example, an architecture-related method, an online advertising process, or alleged misuse of intellectual property?” Oh, my.

The third story is a personnel matter. Google has a highly refined human resource methodology. “Google’s Indian-Origin Director of News Laid Off after 13 Years: In Privileged Position” reveals as actual factual:

Google has sent Chinnappa on a “gardening leave…

Ah, ha, Google is taking steps to further its green agenda. I wonder if the “Indian origin Xoogler” will dig a hole and fill it with Chromebooks from the Oakland school district.

Amazing, beloved Google. Amazing.

Stephen E Arnold, July 25, 2023

Harvard Approach to Ethics: Unemployed at Stanford

July 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I attended such lousy schools no one bothered to cheat. No one was motivated. No parents cared. It was a glorious educational romp because the horizons for someone in a small town in the dead center of Illinois led nowhere. The proof? Visit a small town in Illinois and what do you see? Not much. Think of Cairo, Illinois, as a portent. In the interest of full disclosure, I did sell math and English homework to other students in that intellectual wasteland. Now you know how bad my education was. People bought “knowledge” from me. Go figure.

“You have been cheating,” says the old-fashioned high school teacher. The student who would rise to fame as a brilliant academician and consummate campus politician replies, “No, no, I would never do such a thing.” The student sitting next to this would-be future beacon of proper behavior snarls, “Oh, yes you were. You did not read Aristotle’s Ethics, so you copied exactly what I wrote in my blue book. You are disgusting. And your suspenders are stupid.”

But in big name schools, cheating apparently is a thing. Competition is keen. The stakes are high. I suppose that’s why an ethics professor at Harvard made some questionable decisions. I thought that somewhat scandalous situation would have motivated big name universities to sweep cheating even farther under the rug.

But no, no, no.

The Stanford student newspaper — presumably written by humanoid students awash with Philz Coffee — wrote “Stanford President Resigns over Manipulated Research, Will Retract at Least Three Papers.” The subtitle is cute; to wit:

Marc Tessier-Lavigne failed to address manipulated papers, fostered unhealthy lab dynamic, Stanford report says

Okay, this respected leader and thought leader for the students who want to grow up to be just like Larry, Sergey, and Peter, among other luminaries, took some liberties with data.

The presumably humanoid-written article reports:

Tessier-Lavigne defended his reputation but acknowledged that issues with his research, first raised in a Daily investigation last autumn, meant that Stanford requires a president “whose leadership is not hampered by such discussions.”

I am confident reputation management firms and a modest convocation of legal eagles will explain this Harvard-echoing matter. With regard to the soon-to-be former president, I really don’t care about him, his allegedly fiddled research, and his tear-inducing explanation which will appear soon.

Here’s what I care about:

  1. Is it any wonder why graduates of Stanford University — plug in your favorite Sillycon Valley wizard who graduated from the prestigious university — find trust difficult to manifest? I don’t. I am not sure “trust,” excellence, and Stanford are words that can nest comfortably on campus.
  2. Is any academic research reproducible? I know that ballpark estimates suggest that as much as 40 percent of published research may manifest the tiny problem of duplicating the results. Is it time to think about what actions are teaching students what’s okay and what’s not?
  3. Does what I shall call “ethics rot” extend outside of academic institutions? My hunch is that big time universities have had some challenges with data in the past. No one bothered to check too closely. I know that the estimable William James looked for mistakes in the writings of those who disagreed with radical empiricism stuff, but today? Yeah, today.

Net net: Ethical rot, not academic excellence, seems to be a growth business. Now, which Stanford graduates’ businesses have taken ethical short cuts to revenue? I hear crickets.

PS. Is it three, five, or an unknown number of papers with allegedly fakey wakey information? Perhaps the Stanford humanoids writing the article were hallucinating when working with the number of fiddled articles? Let’s ask Bard. Oh, right, a Stanford-infused service. The analogy is an institution as bereft as pathetic Cairo, Illinois. Check out some pictures here.

Stephen E Arnold, July 25, 2023
