AI Innovation: Do Only the Big Dogs Get the Fat, Farmed Salmon?

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Let’s talk about statements like “AI will be open source” and “AI has spawned hundreds, if not thousands, of companies.” Those assertions seem slightly different from what’s unfolding at some of the largest technology outfits in the world. The circling and sniffing allegedly underway between the Apple and the Google packs is interesting. Apple and Google have a relationship, probably one that will need a marriage counselor, but it is a relationship.


The wizard scientists have created an interesting digital construct. Thanks, MSFT Copilot. How are you coming along with your Windows 11 updates and Azure security today? Oh, that’s too bad.

The news, however, is that Microsoft is demonstrating that it wants to eat the fattest salmon in the AI stream. Microsoft has a deal of some type with OpenAI, operating under the steady hand of Sam AI-Man. Plus the Softies have cozied up to the French outfit Mistral. Today at 5:30 am US Eastern I learned that Microsoft has embraced an outstanding thinker, sensitive manager, and pretty much the entire Inflection AI outfit.

The number of stories about this move reflects the interest in smart software and in what may be one of the world’s top purveyors of software which attracts bad actors from around the world. Thinking about breaches in the new Microsoft world is not a topic in the write ups about this deal. Why? I think the management move has captured attention because it is surprising, disruptive, and big in terms of money and implications.

“Microsoft Hires DeepMind Co-Founder Suleyman to Run Consumer AI” states:

DeepMind workers complained about his [former Googler Mustafa Suleyman and subsequent Inflection.ai senior manager] management style, the Financial Times reported. Addressing the complaints at the time, Suleyman said: “I really screwed up. I was very demanding and pretty relentless.” He added that he set “pretty unreasonable expectations” that led to “a very rough environment for some people. I remain very sorry about the impact that caused people and the hurt that people felt there.” Suleyman was placed on leave in 2019 and months later moved to Google, where he led AI product management until exiting in 2022.

Okay, a sensitive manager learns from his mistakes and joins Microsoft.

And Microsoft demonstrates that the AI opportunity is wide open. “Why Microsoft’s Surprise Deal with $4 Billion Startup Inflection Is the Most Important Non-Acquisition in AI” states:

Ever since OpenAI launched ChatGPT in November 2022, the tech world has been experiencing a collective mania for AI chatbots, pouring billions of dollars into all manner of bots with friendly names (there’s Claude, Rufus, Poe, and Grok — there’s even a chatbot name generator). In January, OpenAI launched a GPT store that’s chock full of bots. But how much differentiation and value can these bots really provide? The general concept of chatbots and copilots is probably not going away, but the demise of Pi may signal that reality is crashing into the exuberant enthusiasm that gave birth to countless chatbots.

Several questions will be answered in the weeks ahead:

  1. What will regulators in the EU and US do about the deal when its moving parts become known?
  2. How will the kumbaya evolve when Microsoft senior managers, its AI partners, and reassigned Microsoft employees have their first all-hands Teams or off-site meeting?
  3. Does Microsoft senior management have the capability of addressing the attack surface of the new technologies and the existing Microsoft software?
  4. What happens to the AI ecosystem which depends on open source software related to AI if Microsoft shifts into “commercial proprietary” to hit revenue targets?
  5. With multiple AI systems, how are Microsoft Certified Professional agents going to [a] figure out what broke and [b] fix it?
  6. With AI the apparent “next big thing,” how will adversaries like nations not pals with the US respond?

Net net: How unstable is the AI ecosystem? Let’s ask IBM Watson because its output is going to be as useful as any other in my opinion. My hunch is that the big dogs will eat the fat, farmed salmon. Who will pull that luscious fish from the big dog’s maw? Not me.

Stephen E Arnold, March 20, 2024

The TikTok Flap: Wings on a Locomotive?

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I find the TikTok flap interesting. The app was purposeless until someone discovered that pre-teens and those with similar mental architecture would watch short videos on semi-forbidden subjects; for instance, see-through dresses, the thrill of synthetic opioids, updating the Roman vomitorium for a quick exit from parental reality, and the always-compelling self-harm presentations. But TikTok is not just a content juicer; it can provide some useful data in its log files. Cross correlating these data can provide some useful insights into human behavior. Slicing geographically makes it possible to do wonderful things. Apply some filters and a psychological profile can be output from a helpful intelware system. Whether these types of data surfing take place is not important to me. The infrastructure exists and can be used (with or without authorization) by anyone with access to the data.
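To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of geographic slicing and filtering described above. Every field name and the data source are hypothetical; the point is only how little code such profiling requires once someone has access to the logs, not how ByteDance actually does anything.

```python
# Hypothetical illustration: slicing per-view log data by geography.
# Field names (geo_region, topic, watch_seconds) are invented for this sketch.
import pandas as pd

logs = pd.read_csv("view_logs.csv")  # assumed export of per-view events

# Cross correlate region and topic, ranking what holds attention where.
profile = (
    logs.groupby(["geo_region", "topic"])["watch_seconds"]
        .sum()
        .sort_values(ascending=False)
)

# Slice to one region for a crude behavioral profile.
print(profile.loc["us-east"].head(10))
```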


Like bird wings on a steam engine, the ban on TikTok might not fly. Thanks, MSFT Copilot. How is your security revamp coming along?

What’s interesting to me is that the US Congress took action to make some changes in the TikTok business model. My view is that social media services required pre-emptive regulation when they first poked their furry, smiling faces into young users’ immature brains. I gave several talks about the risks of social media online in the 1990s. I even suggested remediating actions at the open source intelligence conferences operated by Major Robert David Steele, a former CIA professional and conference entrepreneur. As I recall, no one paid any attention. I am not sure anyone knew what I was talking about. Intelligence, then, was not into the strange new thing of open source intelligence and weaponized content.

Flash forward to 2024: after the US government geared up to “ban” TikTok or force ByteDance to divest itself of the app, many interesting opinions flooded the poorly maintained and rapidly deteriorating information highway. I want to highlight two of these write ups, their main points, and offer a few observations. (I understand that no one cared 30 years ago, but perhaps a few people will pay attention as I write this on March 16, 2024.)

The first write up is “A TikTok Ban Is a Pointless Political Turd for Democrats.” The language sets the scene for the analysis. I think the main point is:

Banning TikTok, but refusing to pass a useful privacy law or regulate the data broker industry is entirely decorative. The data broker industry routinely collects all manner of sensitive U.S. consumer location, demographic, and behavior data from a massive array of apps, telecom networks, services, vehicles, smart doorbells and devices (many of them *gasp* built in China), then sells access to detailed data profiles to any nitwit with two nickels to rub together, including Chinese, Russian, and Iranian intelligence. Often without securing or encrypting the data. And routinely under the false pretense that this is all ok because the underlying data has been “anonymized” (a completely meaningless term). The harm of this regulation-optional surveillance free-for-all has been obvious for decades, but has been made even more obvious post-Roe. Congress has chosen, time and time again, to ignore all of this.

The second write up is “The TikTok Situation Is a Mess.” This write up eschews the colorful language of the TechDirt essay. Its main point, in my opinion, is:

TikTok clearly has a huge influence over a massive portion of the country, and the company isn’t doing much to actually assure lawmakers that situation isn’t something to worry about.

Thus, the article makes clear its concern about the outstanding individuals serving in a representative government in Washington, DC, the true home of ethical behavior in the United States:

Congress is a bunch of out-of-touch hypocrites.

What do I make of these essays? Let me share my observations:

  1. It is too late to “fix up” the TikTok problem or clean up the DC “mess.” The time to act was decades ago.
  2. Virtual private networks and more sophisticated “get around” technology will be tapped by fifth graders so that short form videos about forbidden subjects can be consumed. How long will it take a savvy fifth grader to “teach” her classmates about a point-and-click VPN? Two or three minutes. Will the hungry minds recall the information? Yep.
  3. The idea that “privacy” has not been regulated in the US is a fascinating point. Who exactly was pro-privacy in the wake of 9/11? Who exactly declined to use Google’s services as information about the firm’s data hoovering surfaced in the early 2000s? I will not provide the answer to this question because Google’s 90 percent plus share of the online search market presents the answer.

Net net: TikTok is one example of software with a penchant for capturing data and retaining those data in a form which can be processed for nuggets of information. One can point to Alibaba.com, CapCut.com, Temu.com or my old Huawei mobile phone which loved to connect to servers in Singapore until our fiddling with the device killed it dead.

Stephen E Arnold, March 20, 2024

Humans Wanted: Do Not Leave Information Curation to AI

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Remember RSS feeds? Before social media took over the Internet, they were the way we got updates from sources we followed. It may be time to dust off the RSS, for it is part of blogger Joan Westenberg’s plan to bring a human touch back to the Web. We learn of her suggestions in, “Curation Is the Last Best Hope of Intelligent Discourse.”

Westenberg argues human judgement is essential in a world dominated by AI-generated content of dubious quality and veracity. Generative AI is simply not up to the task. Not now, perhaps not ever. Fortunately, a remedy is already being pursued, and Westenberg implores us all to join in. She writes:

“Across the Fediverse and beyond, respected voices are leveraging platforms like Mastodon and their websites to share personally vetted links, analysis, and creations following the POSSE model – Publish on your Own Site, Syndicate Elsewhere. By passing high-quality, human-centric content through their own lens of discernment before syndicating it to social networks, these curators create islands of sanity amidst oceans of machine-generated content of questionable provenance. Their followers, in turn, further syndicate these nuggets of insight across the social web, providing an alternative to centralised, algorithmically boosted feeds. This distributed, decentralised model follows the architecture of the web itself – networks within networks, sites linking out to others based on trust and perceived authority. It’s a rethinking of information democracy around engaged participation and critical thinking from readers, not just content generation alone from so-called ‘influencers’ boosted by profit-driven behemoths. We are all responsible for carefully stewarding our attention and the content we amplify via shares and recommendations. With more voices comes more noise – but also more opportunity to find signals of truth if we empower discernment. This POSSE model interfaces beautifully with RSS, enabling subscribers to follow websites, blogs and podcasts they trust via open standard feeds completely uncensored by any central platform.”
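For readers who have not dusted off RSS in a while, following trusted sites via open feeds takes only a few lines. Here is a minimal sketch using Python’s feedparser library; the feed URLs are placeholders, not endorsements of any particular site.

```python
# Minimal sketch: follow trusted sites via open RSS/Atom feeds,
# no central platform or algorithmic feed required.
import feedparser  # pip install feedparser

feeds = [
    "https://example.com/feed.xml",          # placeholder URLs
    "https://another-trusted-site.org/rss",
]

for url in feeds:
    parsed = feedparser.parse(url)
    print(parsed.feed.get("title", url))
    for entry in parsed.entries[:5]:
        print("  -", entry.get("title", "untitled"), entry.get("link", ""))
```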

But is AI all bad? No, Westenberg admits, the technology can be harnessed for good. She points to Anthropic’s Constitutional AI as an example: it was designed to preserve existing texts instead of overwriting them with automated content. It is also possible, she notes, to develop AI systems that assist human curators instead of competing with them. But we suspect we cannot rely on companies that profit from the proliferation of shoddy AI content to supply such systems. Who will? People with English majors?

Cynthia Murrell, March 20, 2024

Software Failure: Why Problems Abound and Multiply Like Gerbils

March 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Why Software Projects Fail” after a lunch at which crappy software and lousy products were a source of amusement. The door fell off what?

What’s interesting about the article is that it contains a number of statements which resonated with me. I recommend the article, but I want to highlight several statements from the essay. These do a good job of explaining why small and large projects go off the rails. Within the last 12 months I witnessed one project get tangled in solving a problem that existed 15 years ago. Today, not so much. The team crafted the equivalent of a Greek Corinthian helmet from the 8th century BCE. Another project, infused with AI and a vision of providing a “new” approach to security, wobbled among a telecommunications approach, an email approach, and an SMS approach with bells and whistles only a science fiction fan would appreciate. Both of these examples obtained funding; neither set out to build a clown car. What happened? That’s where “Why Software Projects Fail” becomes relevant.


Thanks, MSFT Copilot. You have that MVP idea nailed with the recent Windows 11 update, don’t you? Good enough, I suppose.

Let’s look at three passages from the essay, shall we?

Belief in One’s Abilities or I Got an Also-Participated Ribbon in Middle School

Here’s the statement from the essay:

One of the things that I’ve noticed is that developers often underestimate not just the complexity of tasks, but there’s a general overconfidence in their abilities, not limited to programming:

  1. Overconfidence in their coding skills.
  2. Overconfidence in learning new technologies.
  3. Overconfidence in our abstractions.
  4. Overconfidence in external dependencies, e.g., third-party services or some open-source library.

My comment: Spot on. Those ribbons built confidence, but they mean nothing.

Open Source Is Great Unless It Has Been Screwed Up, Become a Malware Delivery Vehicle, or Just Does Not Work

Here’s the statement from the essay:

… anything you do not directly control is a risk of hidden complexity. The assumption that third-party services, libraries, packages, or APIs will work as expected without bugs is a common oversight.

My view is that “complexity” is kicked around as if everyone held a shared understanding of the term. There are quite different types of complexity. For software, there is the complexity of a simple process created in Assembler but essentially impenetrable to a 20-something from a whiz-bang computer science school. There is the complexity of software built over time by attention deficit driven people who do not communicate, coordinate, or care what others are doing, will do, or have done. Toss in the complexity of indifferent, uninformed, or uninterested “management,” and you get an exciting environment in which to “fix up” software. The cherry on top of this confection is that quite a bit of software is assumed to be good. Ho ho ho.

The Real World: It Exists and Permeates

I liked this statement:

Technology that seemed straightforward refuses to cooperate, external competitors launch similar ideas, key partners back out, and internal business stakeholders focus more on the projects that include AI in their name. Things slow down, and as months turn into years, enthusiasm wanes. Then the snowball continues — key members leave, and new people join, each departure a slight shift in direction. New tech lead steps in, eager to leave their mark, steering the project further from its original course. At this point, nobody knows where the project is headed, and nobody wants to admit the project has failed. It’s a tough spot, especially when everyone’s playing it safe, avoiding the embarrassment or penalties of admitting failure.

What are the signals that trouble looms? A fumbled ball at the Google or the Apple car that isn’t can be blinking warning lights. Staff who go rogue on social media or find an ambulance-chasing law firm can catch some individual’s attention.

The write up contains other helpful observations. Will people take heed? Are you kidding me? Excellence costs money and requires informed judgment and expertise. Who has time for this with AI calendars, the demands of TikTok and Instagram, and hitting the local coffee shop?

Stephen E Arnold, March 19, 2024

A Single Google Gem for March 19, 2024

March 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I want to focus on what could be the star sapphire of Googledom. The story appeared on the estimable Murdoch confection Fox News. Its title? “Is Google Too Broken to Be Fixed? Investors Deeply Frustrated and Angry, Former Insider Warns.” The word choice in this Googley headline signals the alert reader that the Foxy folks have a juicy story to share. “Broken,” “Frustrated,” “Angry,” and “Warns” suggest that someone has identified some issues at the beloved Google.


A Google gem. Thanks, MSFT Copilot Bing thing. How’s the staff’s security today?

The write up states:

A former Google executive [David Friedberg] revealed that investors are “deeply frustrated” that the scandal surrounding their Gemini artificial intelligence (AI) model is becoming a “real threat” to the tech company. Google has issued several apologies for Gemini after critics slammed the AI for creating “woke” content.

The Xoogler, in what seems to be tortured prose, allegedly said:

“The real threat to Google is more so, are they in a position to maintain their search monopoly or maintain the chunk of profits that drive the business under the threat of AI? Are they adapting? And less so about the anger around woke and DEI,” Friedberg explained. “Because most of the investors I spoke with aren’t angry about the woke, DEI search engine, they’re angry about the fact that such a blunder happened and that it indicates that Google may not be able to compete effectively and isn’t organized to compete effectively just from a consumer competitiveness perspective,” he continued.

The interesting comment in the write up (which is recycled podcast chatter) seems to be:

Google CEO Sundar Pichai promised the company was working “around the clock” to fix the AI model, calling the images generated “biased” and “completely unacceptable.”

Does the comment attributed to a Big Dog Microsoftie reflect the new perception of the Google? The Hindustan Times, which should have radar tuned to the actions of certain executives with roots entwined in India, reported:

Satya Nadella said that Google “should have been the default winner” of Big Tech’s AI race as the resources available to it are the maximum which would easily make it a frontrunner.

My interpretation of this statement is that Google had a chance to own the AI casino, roulette wheel, and the croupiers. Instead, Google’s senior management ran over the smart squirrel with the Paris demonstration of the fantastic Bard AI system, a series of me-too announcements, and the outputting of US historical scenes with people of color turning up in what I would call surprising places.

Then the PR parade of Google wizards explains the online advertising firm’s innovations in playing games, figuring out health stuff (shades of IBM Watson), and achieving quantum supremacy in everything. Well, everything except smart software. The predicament of the ad giant is illuminated with the burning of billions in market cap coincident with the wizards’ flubs.

Net net: That’s a gem. Google losing a game it allegedly owned. I am waiting for the next podcast about the Sundar & Prabhakar Comedy Tour.

Stephen E Arnold, March 19, 2024

Microsoft Decides to Work with CISPE on Cloudy Concerns

March 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Perhaps a billion and a half dollars in fines can make a difference to a big tech company after all. In what looks like a move to avoid more regulatory scrutiny, Yahoo Finance reports, “Microsoft in Talks to End Trade Body’s Cloud Computing Complaint.” The trade body here is CISPE, a group of firms that provide cloud services in Europe. Amazon is one of those, but 26 smaller companies are also members. The group asserts certain changes Microsoft made to its terms of service in October of 2022 have harmed Europe’s cloud computing ecosystem. How, exactly, is unclear. Writer Foo Yun Chee tells us:

“[CISPE] said it had received several complaints about Microsoft, including in relation to its product Azure, which it was assessing based on its standard procedures, but declined to comment further. Azure is Microsoft’s cloud computing platform. CISPE said the discussions were at an early stage and it was uncertain whether these would result in effective remedies but said ‘substantive progress must be achieved in the first quarter of 2024’. ‘We are supportive of a fast and effective resolution to these harms but reiterate that it is Microsoft which must end its unfair software licensing practices to deliver this outcome,’ said CISPE secretary general Francisco Mingorance. Microsoft, which notched up 1.6 billion euros ($1.7 billion) in EU antitrust fines in the previous decade, has in recent years changed its approach towards regulators to a more accommodative one.”

Just how accommodating Microsoft will be remains to be seen.

Cynthia Murrell, March 19, 2024

Old Code, New Code: Can You Make It Work Again… Sort Of?

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Even hippy dippy super slick AI start ups have a technical debt problem. It is, in my opinion, no different from the “costs” imposed on outfits like JPMorgan Chase or (heaven help us) AMTRAK. Software which mostly works is subject to two environmental problems. First, the people who wrote the code or made it work that last time catastrophe struck (hello, AT&T, how are those pushed updates working for you now?) move on, quit, or whatever. Second, the technical options for remediating the problem are evolving (how are those security hot fixes working out, Microsoft?).


The helpful father asks a question the aspiring engineer cannot answer. Thus it was when the wizard was a child, and thus it is when the wizard is working on a modern engineering project. Buildings tip; aircraft lose doors and wheels. Software updates kill computers. Self-driving cars cannot drive themselves. Thanks, MSFT Copilot. Did you get your model airplane to fly when you were a wee lad? I think I know the answer.

I thought about this problem of the cost of remediating, fixing, redoing, upgrading, or whatever term fast-talking sales engineers use in their Zooms and PowerPoints as I read “The High-Risk Refactoring.” The write up does a good job of explaining in a gentle way what happens when suits authorize making old code like new again. (The suits do not know the agonies of the original developers, but why should “history” intrude on a whiz bang GenX or GenY management type?)

The article says:

it’s highly important to ensure the system works the same way after the swap with the new code. In that regard, immediately spotting when something breaks throughout the whole refactoring process is very helpful. No one wants to find that out in production.

No kidding.

In most cases, there are insufficient skilled people and money to create a new or revamped system, get it up and running in parallel for an appropriate period of time, identify the problems, remediate them, and then make the cut over. People buy cars this way, but that’s not how most organizations, regardless of size, “do” software. The take-your-car-in, buy-a-new-one, drive-off approach will not work in today’s business environment.
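For what it is worth, here is a minimal sketch of what “running in parallel” can look like at the code level: send each request to both the old and new implementations, serve the old answer, and log any divergence. The two compute functions are hypothetical stand-ins, not anyone’s actual system.

```python
# Hypothetical parallel-run (shadow) harness: the old system stays
# authoritative while the new one is exercised and compared against it.
import logging

logging.basicConfig(level=logging.WARNING)

def legacy_compute(record):       # stand-in for the existing system
    return sum(record) * 1.05

def new_compute(record):          # stand-in for the rewritten system
    return sum(r * 1.05 for r in record)

def handle(record):
    old_result = legacy_compute(record)
    try:
        new_result = new_compute(record)
        if abs(new_result - old_result) > 1e-9:
            logging.warning("divergence for %r: old=%s new=%s",
                            record, old_result, new_result)
    except Exception:
        logging.exception("new system failed for %r", record)
    return old_result             # the old answer still ships

print(handle([10.0, 20.0]))
```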

The write up focuses on what most organizations do; that is, write or fix new code and stick it into a system. There may or may not be resources for a staging server, but the result is the same. The old software has been “fixed” and the documentation is “sort of written” and people move on to other work or in the case of consulting engineering firms, just get replaced by a new, higher margin professional.

The write up takes a different approach and concludes with four suggestions or questions to ask. I quote:

“Refactor if things are getting too complicated, but stop if you can’t prove it works.

Accompany new features with refactoring for areas you foresee to be subject to a change, but copy-pasting is ok until patterns arise.

Be proactive in finding new ways to ensure refactoring predictability, but be conservative about the assumption QA will find all the bugs.

Move business logic out of busy components, but be brave enough to keep the legacy code intact if the only argument is “this code looks wrong”.

These are useful points. I would like to suggest some bright white lines for those who have to tackle an IRS-mainframe- or AT&T-billing system type of challenge as well as tweaking an artificial intelligence solution to respond to those wonky multi-ethnic images Google generated in order to allow the Sundar & Prabhakar Comedy Team to smile sheepishly and apologize again for lousy software.

Are you ready? Let’s go:

  1. Fixes add to the complexity of the code base. As time goes stumbling forward, the complexity of the software becomes greater. The cost of making sure the fix works and does not create exciting dependency behavior goes up. Thus, small fixes “cost” more, and these costs are tough to control.
  2. The safest fixes are “wrappers”; that is, no one in his or her right mind wants to change software written in 1978 for a machine no longer in production by the manufacturer. Therefore, new software is written to interact in a “safe” way with the original software. The new code “fixes up” the problem without screwing up what grandpa programmer wrote almost half a century ago. (A minimal sketch of this wrapper idea appears after this list.) The problem is that “wrappers” tend to slow stuff down. The fix is to say one will optimize the system while one looks for a new project or job.
  3. The software used for “fixing” a problem is becoming the equivalent of repairing an aircraft component with Dawn dish detergent. The “fix” is cheap, easy to use, and good enough. The software equivalent of this Dawn solution is that it will not stand the test of time. Instead of code crafted in good old COBOL or Assembler, we have some Fancy Dan tools which may fall out of favor in a matter of months, not decades.
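Here is the minimal wrapper sketch promised in point 2: leave the legacy routine untouched and adapt around it. The legacy interface below is invented for illustration; real 1978-era code would be far less cooperative.

```python
# Minimal sketch of the wrapper approach: translate, delegate, never edit
# the original. The legacy function is a hypothetical stand-in.
from datetime import date

def legacy_billing(acct_no, yymmdd):
    """Stand-in for grandpa programmer's code: wants a YYMMDD string."""
    return f"BILL:{acct_no}:{yymmdd}"

def billing(account: str, when: date) -> str:
    """New-style interface that wraps the old routine."""
    yymmdd = when.strftime("%y%m%d")       # modern type -> legacy format
    raw = legacy_billing(account, yymmdd)  # untouched legacy logic
    return raw.removeprefix("BILL:")       # adapt the output for new callers

print(billing("A-1001", date(2024, 3, 18)))  # -> A-1001:240318
```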

Many projects promise better, faster, and cheaper. The reminder “Pick two” is helpful.

Net net: Fixing up lousy or flawed software is going to increase risks and costs. The question asked by bean counters is, “How much?” The answer is, “No one knows until the project is done … if ever.”

Stephen E Arnold, March 18, 2024

Worried about TikTok? Do Not Overlook CapCut

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I find the excitement about TikTok interesting. The US wants to play the reciprocity card; that is, China disallows US apps, so the US can ban TikTok. How influential is TikTok? US elected officials learned firsthand that TikTok users can get messages through to what is often a quite unresponsive cluster of legislators. But let’s leave TikTok aside.


Thanks, MSFT Copilot. Good enough.

What do you know about the ByteDance cloud software CapCut? Ah, you have never heard of it. That’s not surprising because it is aimed at those who make videos for TikTok (big surprise) and other video platforms like YouTube.

CapCut has been gaining supporters like the happy-go-lucky people who published “how to” videos about CapCut on YouTube. On TikTok, CapCut short form videos have tallied billions of views. What makes it interesting to me is that it wants to phone home, store content in the “cloud”, and provide high-end tools to handle some tricky video situations like weird backgrounds on AI generated videos.

The product CapCut was named (I believe) JianYing or Viamaker (the story varies by source), which means nothing to me. The Google suggests its meanings could range from “hard” to “paper cut out.” I am not sure I buy these suggestions because Chinese is a linguistically slippery fish. Is that a question or a horse? In 2020, the app got a bit of a shove into the world outside of the estimable Middle Kingdom.

Why is this important to me? Here are my reasons for creating this short post:

  • Based on my tests of the app, it has some of the same data hoovering functions as TikTok
  • The data of images and information about the users provides another source of potentially high value information to those with access to the information
  • Data from “casual” videos might be quite useful when the person making the video has landed a job in a US national laboratory or in one of the high-tech playgrounds in Silicon Valley. Am I suggesting blackmail? Of course not, but a release of certain imagery might be an interesting test of the videographer’s self-esteem.

If you want to know more about CapCut, try these links:

  • Download (ideally to a burner phone or a PC specifically set up to test interesting software) at www.capcut.com
  • Read about the company CapCut in this 2023 Recorded Future write up
  • Learn about CapCut’s privacy issues in this Bloomberg story.

Net net: Clever stuff, but who is paying attention? Parents? Regulators? Chinese intelligence operatives?

Stephen E Arnold, March 18, 2024

AI in Action: Price Fixing Play

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

If a tree falls in a forest and no one is there to hear it, does it make a sound? The obvious answer is yes, but philosophers ask how that can be known if no one was there to witness the event. The same argument can be made that price fixing isn’t illegal if it’s done by an AI algorithm. Smart people know that is a straw man fallacy, and so does the Federal Trade Commission: “Price Fixing By Algorithm Is Still Price Fixing.”

The FTC and the Department of Justice agree that if an action is illegal for a human, then it is illegal for an algorithm too. The official nomenclature is antitrust compliance. Both departments want to protect consumers against algorithmic collusion, particularly in the housing market. They filed a joint legal brief that stresses the importance of a fair, competitive market. The brief stated that algorithms can’t be used to evade illegal price fixing agreements and that it is still unlawful to share price fixing information even if the conspirators retain pricing discretion or cheat on the agreement.

Protecting consumers from unfair pricing practices is extremely important as inflation has soared. Rent has increased by 20% since 2020, especially for lower-income people. Nearly half of renters also pay more than 30% of their income in rent and utilities. The Department of Justice and the FTC also hold other industries accountable for using algorithms illegally:

“The housing industry isn’t alone in using potentially illegal collusive algorithms. The Department of Justice has previously secured a guilty plea related to the use of pricing algorithms to fix prices in online resales, and has an ongoing case against sharing of price-related and other sensitive information among meat processing competitors. Other private cases have been recently brought against hotels and casinos.”

Hopefully the FTC and the Department of Justice retain their power to protect consumers. Inflation will continue to rise, and consumers will continue to suffer.

Whitney Grace, March 18, 2024

Harvard University: William James Continues Spinning in His Grave

March 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

William James, the brother of a novelist (just thinking about any one of his 20 novels causes my mind to wander), loved Harvard University. In a speech at Stanford University, he admitted his untoward affection. If one wanders by William’s grave in Cambridge Cemetery (daylight only, please), one can hear a sound similar to a giant sawmill blade emanating from a modest tombstone. “What’s that horrific sound?” a passer-by might ask. The answer: “William is spinning in his grave. It is a bit like a perpetual motion machine now,” one elderly person says. “And it is getting louder.”


William is spinning in his grave because his beloved Harvard appears to foster making stuff up. Thanks, MSFT Copilot. Working on security today or just getting printers to work?

William is amping up his RPMs. Another distinguished Harvard expert, professor, shaper of the minds of young men and women and thems has been caught fabricating data. This is not the overt synthetic data shop at Stanford University’s Artificial Intelligence Lab and the commercial outfit Snorkel. Nope. This is just a faculty member who, by golly, wanted to be respected, it seems.

The Chronicle of Higher Education (the immensely popular online information service consumed by thumb typers and swipers) published “Here’s the Unsealed Report Showing How Harvard Concluded That a Dishonesty Expert Committed Misconduct.” (Registration required because, you know, information about education is sensitive and users must be monitored.) The report allegedly runs 1,300 pages. I did not read it. I get the drift: Another esteemed scholar just made stuff up. In my lingo, the individual shaped reality to support her / its vision of self. Reality was not delivering honor, praise, rewards, money, and freedom from teaching horrific undergraduate classes. Why not take the Excel macro route to achievement: invent and massage information. Who is going to know?

The write up says:

the committee wrote that “she does not provide any evidence of [research assistant] error that we find persuasive in explaining the major anomalies and discrepancies.” Over all, the committee determined “by a preponderance of the evidence” that Gino “significantly departed from accepted practices of the relevant research community and committed research misconduct intentionally, knowingly, or recklessly” for five alleged instances of misconduct across the four papers. The committee’s findings were unanimous, except for in one instance. For the 2012 paper about signing a form at the top, Gino was alleged to have falsified or fabricated the results for one study by removing or altering descriptions of the study procedures from drafts of the manuscript submitted for publication, thus misrepresenting the procedures in the final version. Gino acknowledged that there could have been an honest error on her part. One committee member felt that the “burden of proof” was not met while the two other members believed that research misconduct had, in fact, been committed.

Hey, William, let’s hook you up to a power test dynamometer so we can determine exactly how fast you are spinning in your chill, dank abode. Of course, if the data don’t reveal high-RPM spinning, someone at Harvard can be enlisted to touch up the data. Everyone seems to be doing it, from my vantage point in rural Kentucky.

Is there a way to harness the energy of professors who may cut corners and respected but deceased scholars to do something constructive? Oh, look. There’s a protest group. Let’s go ask them for some ideas. On second thought… let’s not.

Stephen E Arnold, March 15, 2024

