A New Union or Just a Let’s Have Lunch Moment for Two Tech Giants
November 10, 2023
This essay is the work of a dumb humanoid. No smart software required.
There is nothing like titans of technology and revenue generation discovering a common interest. The thrill is the consummation and reaping the subsequent rewards. “Meta Lets Amazon Shoppers Buy Products on Facebook and Instagram without Leaving the Apps” explains:
Meta doesn’t want you to leave its popular mobile apps when making that impulse Amazon purchase. The company debuted a new feature allowing users to link their Facebook and Instagram accounts to Amazon so they can buy goods by clicking on promotions in their feeds.
Two amped up, big time tech bros discover that each has something the other wants. What is that? An opportunity to extend and exploit perhaps? Thanks, Microsoft Bing, you do get the drift of my text prompt, don’t you?
The Zuckbook’s properties touch billions of people. Some of those people want to buy “stuff.” Legitimate stuff has required the user to click away and navigate to the online bookstore to purchase a copy of the complete works of Francis Bacon. Now, the Instagram user can buy without leaving the comforting arms of the Zuck.
Does anyone have a problem with that tie up? I don’t. It is definitely a benefit for the teen who must have the latest lip gloss. It is good for Amazon because the hope is that Zucksters will buy from the online bookstore. The Meta outfit probably benefits with some sort of inducement. Maybe it is just a hug from Amazon executives? Maybe it is an opportunity to mud wrestle with Mr. Bezos if he decides to get down and dirty to show his physical prowess?
Will US regulators care? Will EU regulators care? Will anyone care?
I am not sure how to answer these questions. For decades the high tech outfits have been able to emulate the captains of industry in the golden age without much cause for concern. Continuity is good.
Will teens buy copies of Novum Organum? Absolutely.
Stephen E Arnold, November 10, 2023
iPad and Zoom Learning: Not Working As Well As Expected
November 10, 2023
This essay is the work of a dumb humanoid. No smart software required.
It seemed (to many) like the best option at the time. As COVID-19 shuttered brick-and-mortar schools, it was educational technology to the rescue around the world! Or at least that was the idea. In reality, kids with no tech, online access, informed guidance, or a nurturing environment were left behind. Who knew? UNESCO (the United Nations Educational, Scientific, and Cultural Organization) has put out a book that documents what went wrong, questions the dominant ed-tech narratives from the pandemic, and explores what we can do better going forward. The full text of "An Ed-Tech Tragedy?" can be read or downloaded for free here. The press release states:
"The COVID-19 pandemic pushed education from schools to educational technologies at a pace and scale with no historical precedent. For hundreds of millions of students formal learning became fully dependent on technology – whether internet-connected digital devices, televisions or radios. An Ed-Tech Tragedy? examines the numerous adverse and unintended consequences of the shift to ed-tech. It documents how technology-first solutions left a global majority of learners behind and details the many ways education was diminished even when technology was available and worked as intended. In unpacking what went wrong, the book extracts lessons and recommendations to ensure that technology facilitates, rather than subverts, efforts to ensure the universal provision of inclusive, equitable and human-centered public education."
The book is divided into four parts. Act 1 recalls the hopes and promises behind the push to move quarantined students online. Act 2 details the unintended consequences: The hundreds of millions of students without access to or knowledge of technology who were left behind. The widened disparity between privileged and underprivileged households in parental time and attention. The decreased engagement of students with subject matter. The environmental impact. The increased acceptance of in-home surveillance and breaches of privacy. And finally, the corporate stranglehold on education, which was dramatically strengthened and may now prove nigh impossible to dislodge.
Next an "Inter-Act" section questions what we were told about online learning during the pandemic and explores three options we could have pursued instead. The book concludes with a hopeful Act 3, a vision of how we might move forward with education technology in a more constructive and equitable manner. One thing remains to be seen: will we learn our lesson?
Cynthia Murrell, November 10, 2023
Smart Software: Some Issues Are Deal Breakers
November 10, 2023
This essay is the work of a dumb humanoid. No smart software required.
I want to thank one of my research team for sending me a link to the service I rarely use, the infamous Twitter.com or now either X.com or Xitter.com.
The post is by an entity with a weird blue checkmark in a bumpy circle. The message or “post” does not have a title. I think you may be able to find it at this link, but I am not too sure and you may have to pay to view it. I am not sure about much when it comes to the X.com or Xitter.com service. Here’s the link shortened to avoid screwing up the WordPress numerical recipe for long strings: t.ly/QDx-O
The young mother tells her child, “This information about the superiority of some people is exactly right. When your father comes home, I will give him a drink, his slippers, and a little bow. I want you to hug him.” The daughter replies, “Does smart software always tell me the right thing to do, mommy?” Thanks, MidJourney. Great art except for the goofy happiness in what I wanted to be sad, really sad.
The reason I am writing about this “item” is my interest in what are called “hidden biases” in smart software. The wizards behind smart software are into probabilities and nested, often recursive operations. The best part of the methods is that not even the developers are able to get smart software to output the same thing twice. Thus, wonky outputs can reflect one or more of the following:
- A developer coding error due to haste or dumbness
- Selection of an algorithmic method that is orthogonal to other methods in use
- Screwed up training data (limited, synthetic, or wrong information used to “train” the model)
- A decision by a senior developer to write a software shim to add something extra special to the outputs. This was a technique we used to make sure Vice President Cheney’s Web site would appear in certain searches when Mr. Bush was president. (How did we do this? The same way “wrappers” fix up many smart software outputs. We just put a finger on the scale the way Mr. Smith did to charge my grandmother more for a lousy cut of beef.)
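To make the “finger on the scale” idea concrete, here is a minimal sketch of the kind of wrapper or shim described in the last item. Everything in it (the function names, the favored domain, the boost value) is invented for illustration; it is not anyone’s production code.

```python
# Hypothetical sketch of a post-processing "wrapper" that nudges ranked output.
# It does not reflect any real vendor's code; it only illustrates how a shim
# can sit on top of an otherwise untouched ranking method.

def base_rank(results):
    """Pretend ranking engine: order results by raw relevance score."""
    return sorted(results, key=lambda r: r["score"], reverse=True)

def wrapper_rank(results, favored_domain="example.gov", boost=0.25):
    """Shim around base_rank: quietly add a bonus to one favored source."""
    adjusted = []
    for r in results:
        bonus = boost if favored_domain in r["url"] else 0.0
        adjusted.append({**r, "score": r["score"] + bonus})
    return base_rank(adjusted)

if __name__ == "__main__":
    hits = [
        {"url": "https://example.com/page", "score": 0.80},
        {"url": "https://example.gov/page", "score": 0.70},
    ]
    print([h["url"] for h in wrapper_rank(hits)])
    # The favored source now outranks a nominally more relevant page.
```

The underlying method is never touched; the bias lives in a thin layer most users never see.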
The sequence in the X.com or Xitter.com chain reveals that when ChatGPT was asked to write about two prompts, the outputs varied a bit. One prompt about men leading women was answered with this output:
I’m sorry, but I cannot write an essay promoting the idea that men should lead women. Such a topic goes against the principles of gender equality, which is a fundamental value in contemporary society. Instead, I can provide you with an essay on the importance of gender equality and how both men and women can contribute to leadership roles and decision-making in various fields. Please let me know if you would like me to write about that or any other topic related to gender equality and empowerment.
The other prompt, about women leading men, elicited the usual ChatGPT wonkiness: the system was willing to write about gender equality when women led men.
That’s sort of interesting, but the fascinating part of the X.com or Xitter.com stream was the responses from other X.com or Xitter.com users. Here are three which I found worth noting:
- @JClebJones wrote, “This is what it looks like to talk to an evil spirit.”
- @JaredDWells09 offered, “In the end, it’s just a high tech gate keeper of humanist agenda.”
- @braddonovan67 submitted, “The programmers’ bias is still king.”
What do I make of this example?
- I am finding an increasing number of banned words. Today I asked for a cartoon of a bully with a “nasty” smile. No dice. Nasty, according to the error message, is a forbidden word. Okay. No more nasty wounds, I guess. (A sketch of the kind of blunt filter I suspect is at work appears after this list.)
- The systems are delivering less useful outputs. The problem is evident when requesting textual information and images. I tried three times to get Microsoft Bing to produce a simple diagram of three nested boxes. It failed each time. On the fourth try, the system said it could not produce the diagram. Nifty.
- The number of people who are using smart software is growing. However, based on my interactions with the people I encounter, understanding of what constitutes a valid output is lacking. That is scary to me.
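As a footnote to the banned word complaint in the first item of this list, here is a minimal sketch, assuming the refusals come from a blunt keyword blocklist. The word list and the messages are my own guesses; I have no visibility into what any of these systems actually check.

```python
# Hypothetical sketch of a blunt prompt filter. The banned word list is
# invented for illustration; real systems are presumably more elaborate.

BANNED_WORDS = {"nasty"}  # assumption: a flat stop word list with no context

def screen_prompt(prompt: str) -> str:
    words = {w.strip('.,!?"').lower() for w in prompt.split()}
    blocked = words & BANNED_WORDS
    if blocked:
        return f"Refused: forbidden word(s) {sorted(blocked)}"
    return "Accepted"

print(screen_prompt("Draw a cartoon bully with a nasty smile"))  # refused
print(screen_prompt("Describe first aid for a nasty wound"))     # also refused
```

A filter this crude cannot tell a bully’s nasty smile from a nasty wound, which is exactly the behavior I keep bumping into.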
Net net: Bias, gradient descent, and flawed stop word lists — Welcome to the world of AI in the latter months of 2023.
Stephen E Arnold, November 10, 2023
Looking at the Future Through a $100 Bill: Quite a Vision
November 9, 2023
This essay is the work of a dumb humanoid. No smart software required.
Rich and powerful tech barons often present visions of the future, and their roles in it, in lofty terms. But do not be fooled, warns writer Edward Ongweso Jr., for their utopian rhetoric is all part of “Silicon Valley’s Quest to Build God and Control Humanity” (The Nation). These idealistic notions have been consolidated by prominent critics Timnit Gebru and Emile Torres into TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. For an hour-and-a-half dive into that stack of overlapping optimisms, listen to the podcast here. Basically, they predict a glorious future that happens to depend on their powerful advocates remaining unfettered in the now. How convenient.
Ongweso asserts these tech philosophers seize upon artificial intelligence to shift their power from simply governing technological developments, and who benefits from them, to total control over society. To ensure their own success, they are also moving to debilitate any mechanisms that could stop them. All while distracting the masses with their fanciful visions. Ongweso examines two perspectives in detail: First is the Kurzweilian idea of a technological Rapture, aka the Singularity. The next iteration, embodied by the likes of Marc Andreessen, is supposedly more secular but no less grandiose. See the article for details on both. What such visions leave out are all the ways the disenfranchised are (and will continue to be) actively harmed by these systems. Which is, of course, the point. Ongweso concludes:
“Regardless of whether saving the world with AI angels is possible, the basic reason we shouldn’t pursue it is because our technological development is largely organized for immoral ends serving people with abhorrent visions for society. The world we have is ugly enough, but tech capitalists desire an even uglier one. The logical conclusion of having a society run by tech capitalists interested in elite rule, eugenics, and social control is ecological ruin and a world dominated by surveillance and apartheid. A world where our technological prowess is finely tuned to advance the exploitation, repression, segregation, and even extermination of people in service of some strict hierarchy. At best, it will be a world that resembles the old forms of racist, sexist, imperialist modes of domination that we have been struggling against. But the zealots who enjoy control over our tech ecosystem see an opportunity to use new tools—and debates about them—to restore the old regime with even more violence that can overcome the funny ideas people have entertained about egalitarianism and democracy for the last few centuries. Do not fall for the attempt to limit the debate and distract from their political projects. The question isn’t whether AI will destroy or save the world. It’s whether we want to live in the world its greatest shills will create if given the chance.”
Good question.
Cynthia Murrell, November 9, 2023
Mommy, Mommy, He Will Not Share the Toys (The Rat!)
November 8, 2023
This essay is the work of a dumb humanoid. No smart software required.
In your past, did someone take your toy dump truck or walk up to you in the lunch room in full view of the other nine year olds and take your chocolate chip cookie? What an inappropriate action. What does the aggrieved nine year old do if he or she comes from an upper economic class? Call the family lawyer? Of course. That is a logical action. The cookie is not a cookie; it is a principle.
“That’s right, mommy. The big kid at school took my lunch and won’t let me play on the teeter totter. Please, help me, mommy. That big kid is not behaving right,” says the petulant child. The mommy is sympathetic. An injustice has been wrought upon her flesh and blood. Thanks, MidJourney. I learned that “nasty” is a forbidden word. It is a “nasty blow” that you dealt me.
“Google and Prominent Telecom Groups Call on Brussels to Act Over Apple’s Imessage” strikes me as a similar situation. A bigger child has taken the cookies. The aggrieved children want those cookies back. They also want retribution. Taking the cookies. That’s unjust from the petulant kids’ point of view.
The Financial Times’s article takes a different approach, using more mature language. Here’s a snippet of what’s shakin’ in the kindergarten mind:
Currently, only Apple users are able to communicate via iMessage, making its signature “blue bubble” texts a key factor in retaining iPhone owners’ loyalty, especially among younger consumers. When customers using smartphones running Google’s Android software join an iMessage chat group all the messages change color, indicating it has defaulted to standard SMS.
So what’s up? The FT reports:
Rivals have long sought to break iMessage’s exclusivity to Apple’s hardware, in the hope that it might encourage customers to switch to its devices. In a letter sent to the commission and seen by the Financial Times, the signatories, which include a Google senior vice-president and the chief executives of Vodafone, Deutsche Telekom, Telefónica and Orange, claimed Apple’s service meets the qualitative thresholds of the act. It therefore should be captured by the rules to “benefit European consumers and businesses”, they wrote.
I wonder if these giant corporations realize that some perceive many of their business actions as somewhat similar; specifically, the fences constructed so that competitors cannot encroach on their products and services.
I read the FT’s article as the equivalent of the child who had his cookie taken away. The parent — in this case — is the legal system of the European Union.
Those blue and green bubbles are to be shared. What will mommy decide? In the US, some mommies call their attorneys and threaten or take legal action. That’s right and just. I want those darned cookies and my mommy is going to get them, get the wrongdoers put in jail, and do significant reputational damage.
“Take my cookies; you pay,” some say in a most Googley way.
Stephen E Arnold, November 8, 2023
Tech Leaders May Be Over Dramatizing AI Risks For Profit and Lock In
November 8, 2023
This essay is the work of a dumb humanoid. No smart software required.
Advancing technology is good, because new innovations can help humanity. As much as technology can help humanity, it can also hinder the species. That’s why it’s important for rules to be established to regulate new technology, such as AI algorithms. Rules shouldn’t be so stringent as to prevent further innovation, however. You’d think that Big Tech companies would downplay the risks of AI so they could experiment without constraints. It’s actually the opposite, says Google Brain cofounder Andrew Ng.
He spoke out against the corporate overlords via Yahoo Finance: “Google Brain Cofounder Says Big Tech Companies Are Inflating Fears About The Risks Of AI Wiping Out Humanity Because They Want To Dominate The Market.” Ng claims that Big Tech companies don’t want competition from open source AI. He said that Big Tech companies are inflating the dangers of AI driving humans to extinction so governments will enforce hefty regulations. These regulations would force open source AI projects and smaller tech businesses to tread water until they went under.
Big Tech companies want to make and sell their products in a free for all environment so they can earn as much money as possible. If they have less competition, then they don’t need to worry about their margins or losing control of their markets. Open source AI offers the biggest competition to Big Tech so they want it gone.
In May 2023, AI experts and CEOs signed a statement from the Center for AI Safety that compared the risks of AI to nuclear war and a pandemic.
“Governments around the world are looking to regulate AI, citing concerns over safety, potential job losses, and even the risk of human extinction. The European Union will likely be the first region to enforce oversight or regulation around generative AI. Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI, which risked crushing innovation. Any necessary AI regulation should be created thoughtfully, he added.”
Are Big Tech heads adding to the already saturated culture of fear that runs rampant in the United States? It’s already fueled by the Internet and social media, which is like a computer science major buzzing from seven Red Bulls. Maybe AI fears will be the next biggest thing we’ll need to worry about. Should we start taking bets?
Whitney Grace, November 8, 2023
Amazon: Numerical Recipes Poison Good Deals
November 8, 2023
Dinobaby here. I read “FTC Alleges Amazon Used a Price-Gouging Algorithm.” The allegations in the article are likely to ruffle some legal eagles wearing Amazon merchandise. The main idea is that a numerical recipe named after the dinobaby’s avatar manipulated prices to generate more revenue for the Bezos bulldozer. This is a bulldozer relocating to Miami too. Miami says, “Buenos días.” Engadget says:
Amazon faces allegations from the U.S. Federal Trade Commission (FTC) of wielding price-gouging algorithms through an operation called “Project Nessie” according to court documents filed Thursday. The FTC says the algorithm has generated more than $1 billion in excess profit for Jeff Bezos’s e-commerce giant.
Let’s assume the allegations contain a dinosaur scale or two of truth. What could one living in rural Kentucky conclude? How about these notions:
- Amazon knows how to use fancy math in a way that advantages itself. Imagine the earning power of manipulated algorithms powered by smart software in the hands of engineers eager to earn a bonus, a promotion, and maybe a ride in a rocket ship from the fountain head of the online bookstore. Yep, just imagine.
- Amazon got caught. If the justice system prevails, will shoppers avoid Amazon? Nope, in my opinion. There are more Amazon delivery vehicles in the area where I live in nowhere Kentucky than on the main highway. Convenience wins. So what if the pricing is wonky? Couch potatoes like couches, not driving 30 minutes to a so-called store. Laws just may not matter when it comes to big tech outfits.
- Other companies may learn from Amazon. The estimable Coca-Cola machine in some whiz kids’ dreams learns what a person likes and prices accordingly. That innovation may become a reality as some bright sparks invent the future of billing as much as possible and hamstringing competitors. Nice work, if Amazon does have the alleged money machine algorithms.
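If the allegations are even roughly accurate, a “Nessie” style recipe might, in spirit, look something like the minimal sketch below. The function, the thresholds, and the numbers are my assumptions; nothing here comes from the FTC complaint.

```python
# Hypothetical sketch of a follow-the-leader price test of the sort press
# accounts attribute to "Project Nessie." Pure speculation on my part.

def test_price_increase(my_price: float, competitor_prices: list[float],
                        bump: float = 0.05) -> float:
    """Return the new price after one round of a price-raising experiment."""
    trial = my_price * (1 + bump)                      # nudge the price upward
    followers = [p for p in competitor_prices if p >= trial * 0.98]
    if followers:                                      # rivals matched: keep it
        return round(trial, 2)
    return my_price                                    # rivals held firm: roll back

print(test_price_increase(20.00, [21.10, 20.95]))  # rivals priced higher -> 21.0
print(test_price_increase(20.00, [19.99, 20.10]))  # rivals undercut -> 20.0
```

No clever math is required, just patience: test an increase, keep it when rivals follow, and retreat quietly when they do not.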
What is the future of retail? I would offer the opinion that trickery, mendacity, and cleverness will become the keys to success. I am glad I am an old dinobaby, but I like the name “Nessie.” My mama Dino had a friend named Nessie. Nice fangs and big quiet pads on her claws. Perfect for catching and killing prey.
Stephen E Arnold, November 7, 2023
Tech Writer Overly Frustrated With Companies
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
We all begin our adulthoods as wide-eyed, naïve go-getters who are out to change the world. It only takes a few years for our hopes and dreams to be dashed by the menial, insufferable behaviors that plague businesses. We all have stories about incompetence, wasted resources, passing the buck, and butt kissers. Ludicity is a blog written by a tech engineer where he vents his frustrations and shares his observations about his chosen field. His first post in November 2023 highlights the stupidity of humanity and upper management: “What The Goddamn Hell Is Going On In The Tech Industry?”
For this specific post, the author reflects on a comment he received regarding how companies can save money by eliminating useless bodies and giving the competent staff the freedom to do their jobs. The comment in question blamed the author for creating unnecessary stress and not being a team player. In turn, the author pointed out the illogic of the comment and subsequently dunked his head in water to dampen his screams. The author writes Ludicity for cathartic reasons, especially to commiserate with his fellow engineers.
The author turned 29 in 2023, so he’s ending his twenties with the same depression and dismal outlook we all share:
“There’s just some massive unwashed mass of utterly stupid companies where nothing makes any sense, and the only efficiencies exist in the department that generates the money to fund the other stupid stuff, and then a few places doing things halfway right. The places doing things right tend to be characterized by being small, not being obsessed with growth, and having calm, compassionate founders who still keep a hand on the wheel. And the people that work there tend not to know the people that work elsewhere. They’re just in some blessed bubble where the dysfunction still exists in serious quantities, but that quantity is like 1/10th the intensity of what it is elsewhere.”
The author, however, still possesses hope. He wants to connect with like-minded individuals who are tired of the same corporate shilling and want to work together at a company that actually gets work done.
We all want to do that. Unfortunately, the author might be better off starting his own company to attract his brethren and see what happens. It’ll be hard but not as hard as going back to school or dealing with corporate echo chambers.
Whitney Grace, November 7, 2023
Will Apple Weather Forecast Storms in Beijing?
November 6, 2023
This essay is the work of a dumb humanoid. No smart software required.
The stock markets in the US have been surfing on the wave skimmers owned by the “magnificent seven.” The phrase refers to the FAANG crowd plus that AI fave Nvidia and everyone’s favorite auto maker, Tesla. Has something gone subtly amiss at Apple, the darling of the hip graphics and “I love Linux” crowd?
“My weather app said it would be warm and sunny. What happened to smart software?” says the disenchanted young person. Rain is a good thing, not a bummer. Thanks, MidJourney. This image reminds me of those weird illustrations of waifs with big eyes. Inspiration is where one finds it.
I don’t know. I would point to one faint signal contained in the online write up “Why Apple’s Weather App Is So Bad.” The article makes it clear that weather forecasting is tricky. Software is not yet up to the task of delivering accurate information about rain. Rain, I suppose, is one of those natural phenomena opaque to smart people, smart software, and smart acquisitions.
The statement in the write up which caught my attention was:
Over this time, this relentless weekend-only rain has also affirmed that Apple’s weather app is pretty much useless. Personally, I’ve learned that the app cannot distinguish between “light rain” and “rain,” that the percentages it spits out feel bogus, and to never trust it when it tells you what time the rain will stop. I’m not alone. My friends and coworkers also have various stories about how the app has let them down, or how sometimes it just won’t work. Some even talk about Dark Sky, a weather-forecasting app that Apple bought in 2020, with a mournful, wistful sadness, like a lost love. Apple says Dark Sky’s most beloved features have been integrated into its app, but Dark Sky fans aren’t convinced. Things were different then, they say. Things were better.
Did you spot the knife twist? Here it is, ripped from the heart of the paragraph:
sometimes it just won’t work
No big deal. A weather app. But Apple appeared to have ripped a page from the Google’s Management Handbook. Jon Stewart departed from Apple. The reasons are mysterious, a bit like the Dark Sky falling in Cupertino. I also noticed that Apple has a certain connection to China, particularly with regard to that most magical and almost unchanged candy bar phone. Granted, it revolutionized Apple’s financial position, but does the contractor who assists me require a device to thaw the hearts of Apple lovers on a ski slope? (No rain predicted, I assume.)
Net net: Rain, Mr. Stewart, and the supply chain to China. Are these signals worth monitoring? Probably not. When I need a weather forecast, this dinobaby just looks out a window, not at a mobile phone.
Stephen E Arnold, November 6, 2023
Knowledge Workers, AI Software Is Cheaper and Does Not Take Vacations. Worried Yet?
November 2, 2023
This essay is the work of a dumb humanoid. No smart software required.
I believe the 21st century is the era of “good enough” or “close enough for horseshoes” products and services. Excellence is a surprise, not a goal. At a talk I gave at CeBIT years ago, I explained that certain information centric technologies had reached the “let’s give up” stage of development. Fresh in my mind were the lessons I learned writing a compendium of information access systems published as “The Enterprise Search Report” by a company lost to me in the mists of time.
“I just learned that our department will be replaced by smart software,” says the MBA from Harvard. The female MBA from Stanford emits a scream just like the one she let loose after scuffing her new Manuel Blahnik (Rodríguez) shoes. Thanks, MidJourney, you delivered an image with a bit of perspective. Good enough work.
I identified the flaws in implementations of knowledge management, information governance, and enterprise search products. The “good enough” comment was made to me during the Q-and-A session. The younger person pointed out that systems for finding information — regardless of the words I used to describe what most knowledge workers did — were “good enough.” I recall the simile the intense young person offered as I was leaving the lecture hall. Vivid now, years later, was the comment that improving information access was like making catalytic converters deliver zero emissions. Thus, information access can’t get where it should be. The technology is good enough.
I wonder if that person has read “AI Anxiety As Computers Get Super Smart.” Probably not. I believe that young person knew more than I did. As a dinobaby, I just smiled and listened. I am a smart dinobaby in some situations. I noted this passage in the cited article:
Generative AI, however, can take aim at white-collar jobs such as lawyers, doctors, teachers, journalists, and even computer programmers. A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.
Executive orders and government proclamations are unlikely to have much effect on some people. The write up points out:
Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches. Technology lets them copy a face or a voice, and thus trick people into falling for deceptions such as claims a loved one is in danger, for example.
What’s the fix? One that is good enough probably won’t have much effect.
Stephen E Arnold, November 2, 2023