The Customer Is Not Right. The Customer Is the Problem!

August 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The CrowdStrike misstep (more like a trivial event such as losing the cap to a Bic pen or misplacing an eraser) seems to be morphing into insights about customer problems. I pointed out that CrowdStrike in 2022 suggested it wanted to become a big enterprise player. The company has moved toward that goal, and it has succeeded in capturing considerable free marketing as well.


Two happy high-technology customers learn that they broke their system. The good news is that the savvy vendor will sell them a new one. Thanks, MSFT Copilot. Good enough.

The interesting failure of an estimated 8.5 million customers’ systems made CrowdStrike a household name. Among some airline passengers, creative people added more colorful language. Delta Airlines has retained a big-time law firm. The idea is to sue CrowdStrike for a misstep that caused concession sales at many airports to go up. Even Panda Chinese looks quite tasty after hours spent in an airport choked with excited people, screaming babies, and stressed-out, overachieving business professionals.

“Microsoft Claims Delta Airlines Declined Help in Upgrading Technology After Outage” reports that, like CrowdStrike, Microsoft’s attorneys want to make quite clear that Delta Airlines is the problem. Like CrowdStrike, Microsoft tried repeatedly to offer a helping hand to the airline. The airline ignored that meritorious, timely action.

In Microsoft’s view, as in CrowdStrike’s, Delta is the problem, not CrowdStrike or Microsoft, whose systems were blindsided by that trivial update issue. The write up reports:

Mark Cheffo, a Dechert partner [another big-time law firm] representing Microsoft, told Delta’s attorney in a letter that it was still trying to figure out how other airlines recovered faster than Delta, and accused the company of not updating its systems. “Our preliminary review suggests that Delta, unlike its competitors, apparently has not modernized its IT infrastructure, either for the benefit of its customers or for its pilots and flight attendants,” Cheffo wrote in the letter, NBC News reported. “It is rapidly becoming apparent that Delta likely refused Microsoft’s help because the IT system it was most having trouble restoring — its crew-tracking and scheduling system — was being serviced by other technology providers, such as IBM … and not Microsoft Windows,” he added.

The language in the quoted passage, if accurate, is interesting. For instance, there is the comparison of Delta to other airlines which “recovered faster.” Delta was not able to recover faster. One can conclude that Delta’s slowness is the reason the airline was dead on the hot tarmac longer than more technically adept outfits. Among customers grounded by the CrowdStrike misstep, Delta was the problem. That Microsoft, whose systems are as outstanding as they are, wants to make darned sure Delta’s allegations of corporate malfeasance go nowhere fast oozes from this characterization and comparison.

Also, Microsoft’s big-time attorney has conducted a “preliminary review.” No in-depth study of fouling up the inner workings of Microsoft’s software is needed. The big-time lawyers have determined that “Delta … has not modernized its IT infrastructure.” Okay, that’s good. Attorneys are skillful evaluators of another firm’s technological infrastructure. I did not know big-time attorneys had this capability, but as a dinobaby, I try to learn something new every day.

Plus the quoted passage makes clear that Delta did not want help from either CrowdStrike or Microsoft. But the reason is clear: Delta Airlines relied on other firms like IBM. Imagine. IBM, the mainframe people, the former love buddy of Microsoft in the OS/2 days, and the creator of the TV game show phenomenon Watson.

As interesting as this assertion that Delta alone is to blame for making some airports absolute delights during the misstep may be, it seems to me that CrowdStrike and Microsoft do not want to be in court, having to explain the global impact of misplacing that ballpoint pen cap.

The other interesting facet of the approach is the idea that the best defense is a good offense. I find the approach somewhat amusing. The customer, not the people licensing software, is responsible for its own problems. These vendors made an effort to help. The customers, who screwed up their own Rube Goldberg machines, did not accept these generous offers of help. Therefore, the customers caused the financial downturn, relying on outfits like the laughable IBM.

Several observations:

  1. The “customer is at fault” argument is not surprising. End user licensing agreements protect the software developer, not the outfit that pays to use the software.
  2. For CrowdStrike and Microsoft, a loss in court to Delta Airlines will stimulate other inept customers to seek redress from these outstanding commercial enterprises. Delta’s litigation must be stopped, and quickly, using money and legal methods.
  3. None of the yip-yap about “fault” pays much attention to the people who were directly affected by the trivial misstep. Customers, regardless of their position in the revenue food chain, are the problem. The vendors are innocent, and they have rights too, just like a person.

For anyone looking for a new legal matter to follow, the CrowdStrike and Microsoft versus Delta Airlines dispute may be a replacement for assorted murders, sniping among politicians, and disputes about “get out of jail free” cards. The vloggers and the poohbahs have years of interactions to observe and analyze. Great stuff. I like the “customer is the problem” twist too.

Oh, I must keep in mind that I am at fault when a high-technology outfit delivers low-technology.

Stephen E Arnold, August 7, 2024

One Legal Stab at CrowdStrike Liability

July 30, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “CrowdStrike Will Be Liable for Damages in France, Based on the OVH Precedent.” OVH is a provider of hosting and what I call “enabling services” to organizations in France, Europe, and other countries. The write up focuses on a modest problem OVH experienced in 2021. A fire consumed four of OVH’s data centers. Needless to say, the customers of one of the largest online services providers in Europe were not too happy for two reasons: Backups were not available, and the affected organizations were knocked offline.


Two astronauts look down at earth from the soon to be decommissioned space station. The lights and power on earth just flicked off. Thanks, Microsoft Copilot. No security meetings today?

The article focuses on the French courts’ decision that OVH was liable for damages. A number of details about the legal logic appear in the write up. For those of you who still watch Perry Mason reruns on Sling, please, navigate to the cited article for the details. I boiled the OVH tale down to a single dot point from the excellent article:

The court ruled the OVH backup service was not operated to a reasonable standard and failed at its purpose.

This means that in France and probably the European Union those technology savvy CrowdStrike wizards will be writing checks. The firm’s lawyers will get big checks for a number of years. Then the falconers of cyber threats will be scratching out checks to the customers and probably to some of the well-heeled downstream airport lounge sleepers, the families of patients who died because surgeries could not be performed, and a kettle of seething government agencies whose emergency call services were dead.

The write up concludes with this statement:

Customers operating in regulated industries like healthcare, finance, aerospace, transportation, are actually required to test and stage and track changes. CrowdStrike claims to have a dozen certifications and standards which require them to follow particular development practices and carry out various level of testing, but they clearly did not. The simple fact that CrowdStrike does not do any of that and actively refuses to, puts them in breach of compliance, which puts customers themselves in breach of compliance by using CrowdStrike. All together, there may be sufficient grounds to unilaterally terminate any CrowdStrike contracts for any customer who wishes to.

The key phrase is “in breach of compliance”. That’s going to be an interesting bit of lingo for lawyers involved in the dead Falcon affair to sort out.

Several observations:

  1. Will someone in the post-Falcon mess raise the question, “Could this be a recipe for a bad actor to emulate?” Could friends of one of the founders, who has some ties to Russia, be asked questions?
  2. What about that outstanding security of the Microsoft servers? How will the smart software outfit fixated on putting ads for a browser in an operating system respond? Those blue screens are not what I associate with my Apple Mini servers. I think our Linux boxes display a somewhat ominous black screen. Blue is who?
  3. Will this incident be shoved around until absolutely no one knows who signed off on the code modules which contributed to this somewhat interesting global event? My hunch is it could be a person working as a contractor from a yurt somewhere northeast of Armenia. What’s your best guess?

Net net: It is definite that a cyber attack aimed at the heart of Microsoft’s software can create global outages. How many computer science students in Bulgaria are thinking about this issue? Will bad actors’ technology wizards rethink what can be done with a simple pushed update?

Stephen E Arnold, July 30, 2024

Google AdWords in Russia?

July 23, 2024

This essay is the work of a dumb humanoid. No smart software required.

I have been working on a project requiring me to examine a handful of Web sites hosted in Russia, in the Russian language, and tailored for people residing in Russia and its affiliated countries. I came away today with a screenshot from the site for IT Cube Studio. The outfit creates Web sites and provides advertising services. Here’s a screenshot in Russian which advertises the firm’s ability to place Google AdWords for a Russian client:

[screenshot of the IT Cube Studio advertisement]

If you don’t read Russian, here’s the translation of the text. I used Google Translate which seems to do an okay job with the language pair Russian to English. The ad says:

Contextual advertising. Potential customers and buyers on your website a week after the start of work.

The word in the screenshot (Яндекс) is the Russian spelling of Yandex. The Google word is “Google.”

I thought there were sanctions. In fact, I navigated to Google and entered this query: “google AdWords Russia.” What did Google tell me on July 22, 2024, at 5:03 pm US Eastern time?

Here’s the Google results page:

[screenshot of the Google results page]

The screenshot is difficult to read, but let me highlight the answer to my question about Google’s selling AdWords in Russia.

There is a March 10, 2022, update which says:

Mar 10, 2022 — As part of our recent suspension of ads in Russia, we will also pause ads on Google properties and networks globally for advertisers based in [Russia] …

Plus there is one of those “smart” answers which says:

People also ask

Does Google Ads work in Russia?

Due to the ongoing war in Ukraine, we will be temporarily pausing Google ads from serving to users located in Russia. [Emphasis in the original Google results page display]

I know my Russian is terrible, but I am probably slightly better equipped to read and understand English. The Google results seem to say, “Hey, we don’t sell AdWords in Russia.”

I wonder if the company IT Cube Studio is just doing some marketing razzle dazzle. Is it possible that Google is saying one thing and doing another in Russia? I recall that Google said it was not sniffing WiFi data in Germany a number of years ago. I believe that Google was surprised when the WiFi sniffing was documented and disclosed.

I find these big company questions difficult to answer. I am certainly not a Google-grade intellect. I am a dinobaby. And I am inclined to believe that there is a really simple explanation or a very, very sincere apology if the IT Cube Studio outfit is selling Google AdWords when sanctions are in place.

If any one of the two or three people who follow my Web log knows the answer to my questions, please, let me know. You can write me at benkent2020 at yahoo dot com. For now, I find this interesting. The Google would not violate sanctions, would it?

Stephen E Arnold, July 23, 2024

What Will the AT&T Executives Serve Their Lawyers at the Security Breach Debrief?

July 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

On the flight back to my digital redoubt in rural Kentucky, I had the thrill of sitting behind a couple of telecom types who were laughing at the pickle AT&T has plopped on top of what I think of as a Judge Green slushee. Do lime slushees and dill pickles go together? For my tastes, nope. Judge Green wanted to de-monopolize the Ma Bell I knew and loved. (Yes, I cashed some Ma Bell checks, and I had a Young Pioneers hat.)

We are back to what amounts to a Ma Bell trifecta: AT&T (the new version, which wears spurs and chaps), Verizon (everyone’s favorite throwback carrier), and the new T-Mobile (bite those customer pocketbooks as if they were bratwursts mit sauerkraut). Each of these outfits is interesting. But at the moment, AT&T is in the spotlight.

“Data of Nearly All AT&T Customers Downloaded to a Third-Party Platform in a 2022 Security Breach” dances around a modest cyber misstep at what is now a quite old and frail Ma Bell. Imagine the good old days before the Judge Green decision to create the Baby Bells. Security breaches were possible, but it was quite tough to get the customer data. Attacks were limited to those with the knowledge (somewhat tough to obtain), the tools (3B series computers and lots of mainframes), and access to network connections. Technology has advanced. Consequently, competition means that no one makes money via security. Security is better at old-school monopolies because money can be spent without worrying about revenue. As one AT&T executive said to my boss at a blue-chip consulting company, “You guys charge so much we will have to get another railroad car filled with quarters to pay your bill.” Ho ho ho — except the fellow was not joking. At the pre-Judge Green AT&T, spending money on security was definitely not an issue. Today? Seems to be different.

A more pointed discussion of Ma Bell’s breaking her hip again appears in “AT&T Breach Leaked Call and Text Records from Nearly All Wireless Customers,” which states:

AT&T revealed Friday morning (July 12, 2024) that a cybersecurity attack had exposed call records and texts from “nearly all” of the carrier’s cellular customers (including people on mobile virtual network operators, or MVNOs, that use AT&T’s network, like Cricket, Boost Mobile, and Consumer Cellular). The breach contains data from between May 1st, 2022, and October 31st, 2022, in addition to records from a “very small number” of customers on January 2nd, 2023.

The “problem,” if I understand the write up, is the reference to Snowflake. Is AT&T suggesting that Snowflake is responsible for the breach? Big outfits like to identify the source of the problem. If Snowflake made the misstep, isn’t it the responsibility of AT&T’s cyber unit to make sure that the security was as good as or better than the security implemented before the Judge Green break up? I think AT&T, like other big companies, wants to find a way to shift blame, not say, “We put the pickle in the lime slushee.”

My posture toward two year old security issues is, “What’s the point of covering up a loss of ‘nearly all’ customers’ data?” I know the answer: Optics and the share price.

As a person who owned a Young Pioneers’ hat, I am truly disappointed in the company. The Regional Managers for whom I worked as a contractor had security on the list of top priorities from day one. Whether we were fooling around with a Western Electric data service or the research charge back system prior to the break up, security was not someone else’s problem.

Today it appears that AT&T has made some decisions which are now perched on the top officer’s head. Security problems are, therefore, tough to miss. Boeing loses doors and wheels from aircraft. Microsoft tantalizes bad actors with insecure systems. AT&T outsources high value data and then moves more slowly than the last remaining turtle in the mine runoff pond near my home in Harrod’s Creek.

Maybe big is not as wonderful as some expect the idea to be? Responsibility for one’s decisions and an ethical compass are not cyber tools, but both notions are missing in some big company operations. Will the after-action team guzzle lime slushees with pickles on top?

Stephen E Arnold, July 15, 2024

NSO Group Determines Public Officials Are Legitimate Targets

July 12, 2024

Well, that is a point worth making if one is the poster child of the specialized software industry.

NSO Group, makers of the infamous Pegasus spyware, makes a bold claim in a recent court filing: “Government and Military Officials Fair Targets of Pegasus Spyware in All Cases, NSO Group Argues,” reports cybersecurity news site The Record. The case at hand is Pegasus’ alleged exploitation of a WhatsApp vulnerability back in 2019. Reporter Suzanne Smalley cites former United Nations official David Kaye, who oversaw the right to free expression at that time. Smalley writes:

“Friday’s filing seems to suggest a broader purpose for Pegasus, Kaye said, pointing to NSO’s explanation that the technology can be used on ‘persons who, by virtue of their positions in government or military organizations, are the subject of legitimate intelligence investigations.’ ‘This appears to be a much more extensive claim than made in 2019, since it suggests that certain persons are legitimate targets of Pegasus without a link to the purpose for the spyware’s use,’ said Kaye, who was the U.N.’s special rapporteur on freedom of opinion and expression from 2014 to 2020. … The Israeli company’s statement comes as digital forensic researchers are increasingly finding Pegasus infections on phones belonging to activists, opposition politicians and journalists in a host of countries worldwide. NSO Group says it only sells Pegasus to governments, but the frequent and years-long discoveries of the surveillance technology on civil society phones have sparked a public uproar and led the U.S. government to crack down on the company and commercial spyware manufacturers in general.”

See the article for several examples of suspected targets around the world. We understand both the outrage and the crack down. However, publicly arguing about the targets of spyware may have unintended consequences. Now everyone knows about mobile phone data exfiltration and how that information can be used to great effect.

As for the WhatsApp court case, it is proceeding at the sluggish speed of justice. In March 2024, a California federal judge ordered NSO Group to turn over its secret spyware code. What will be the verdict? When will it be handed down? And what about the firm’s senior managers?

Cynthia Murrell, July 12, 2024

Falling Apples: So Many to Harvest and Sell to Pay the EU

June 25, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

What goes up seems to come down. Apple is peeling back on the weird headset gizmo. The company’s AI response — despite the thrills Apple Intelligence produced in some acolytes — is “to be” AI or vaporware. China dependence remains a sticky wicket. And if the information in “Apple Has Very Serious Issues Under Sweeping EU Digital Rules, Competition Chief Says” is accurate, the happy giant in Cupertino will be writing some Jupiter-sized checks. Imagine. Pesky Europeans are asserting that Apple has a monopoly and has been acting less like Johnny Appleseed and more like Andrew Carnegie.


A powerful force causes Tim Apple to wonder why so many objects are falling on his head. Thanks, MSFT Copilot. Good enough.

The write up says:

… regulators are preparing charges against the iPhone maker. In March [2024], the European Commission, the EU’s executive arm, opened a probe into Apple, Alphabet and Meta, under the sweeping Digital Markets Act tech legislation that became applicable this year. The investigation featured several concerns about Apple, including whether the tech giant is blocking businesses from telling their users about cheaper options for products or about subscriptions outside of the App Store.

Would Apple, the flag bearer for almost-impossible-to-repair products and software that just won’t charge laptop batteries no matter what the user does prior to a long airplane flight, prevent the free flow of information?

The EU nit pickers believe that Apple’s principles and policies are a “serious issue.”

How much money is possibly involved if the EU finds Apple a — pardon the pun — a bad apple in a barrel of rotten US high technology companies? The write up says:

If it is found in breach of Digital Markets Act rules, Apple could face fines of up to 10% of the company’s total worldwide annual turnover.

Apple captured about $380 billion in FY2023; ten percent works out to a potential payday for the EU of about US$38 billion and change.
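The arithmetic can be checked in a couple of lines. The 10% cap and the roughly $380 billion revenue figure come from the article; the rest is just multiplication:

```python
# Back-of-the-envelope check of the potential DMA fine: the Digital Markets
# Act caps fines at 10% of total worldwide annual turnover.
apple_fy2023_revenue = 380e9  # approximate FY2023 revenue cited above, in USD
max_fine = 0.10 * apple_fy2023_revenue
print(f"Maximum DMA fine: ${max_fine / 1e9:.0f} billion")  # → $38 billion
```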

Speaking of change, will a big fine cause those Apples to levitate? Nope.

Stephen E Arnold, June 25, 2024

Meta Case Against Intelware Vendor Voyager Labs to Go Forward

June 21, 2024

Another clever intelware play gets trapped and now moves to litigation. Meta asserts that when Voyager Labs scraped data on over 600,000 Facebook users, it violated its contract with the platform. Furthermore, Meta charges, the scraping violated anti-hacking laws. While Voyager insists the case should be summarily dismissed, U.S. District Court Judge Araceli Martinez-Olguin disagrees. MediaDailyNews reports, “Meta Can Proceed With Claims that Voyager Labs Scraped Users’ Data.” Writer Wendy Davis explains:

“Voyager argued the complaint should be dismissed at an early stage for several reasons. Among others, Voyager said the allegations regarding Facebook’s terms of service were too vague. Meta’s complaint ‘refers to a catchall category of contracts … but then says nothing more about those alleged contracts, their terms, when they are supposed to have been executed, or why they allegedly bind Voyager UK today,’ Voyager argued to Martinez-Olguin in a motion filed in February. The company also said California courts lacked jurisdiction to decide whether the company violated federal or state anti-hacking laws. Martinez-Olguin rejected all of Voyager’s arguments on Thursday. She wrote that while Meta’s complaint could have set out the company’s terms of service ‘with more clarity,’ the allegations sufficiently informed Voyager of the basis for Meta’s claim.”

This battle began in January 2023 when Meta first filed the complaint. Now it can move forward. How long before the languid wheels of justice turn out a final ruling? A long time, we wager.

Cynthia Murrell, June 21, 2024

Hallucinations in the Courtroom: AI Legal Tools Add to Normal Wackiness

June 17, 2024

Law offices are eager to lighten their humans’ workload with generative AI. Perhaps too eager. Stanford University’s HAI reports, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Close enough for horseshoes, but for justice? And that statistic is with improved, law-specific software. We learn:

“In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.”

But that was before tailor-made retrieval-augmented generation tools. The article continues:

“Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim ‘avoid’ hallucinations and guarantee ‘hallucination-free’ legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined ‘hallucination,’ making it difficult to assess their real-world reliability.”
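The retrieve-then-generate pattern the quoted passage describes can be sketched in a few lines. This is a deliberately toy illustration, with a made-up two-document corpus, naive word-overlap retrieval, and a stand-in generation step; commercial legal RAG products use far more sophisticated retrieval and an actual language model:

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve relevant
# documents first, then ground the "generated" answer in what was retrieved.
# The corpus and the generation step are stand-ins, not any vendor's system.

corpus = {
    "doc1": "A contract requires offer, acceptance, and consideration.",
    "doc2": "Hearsay is an out-of-court statement offered for its truth.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by naive word overlap with the query.
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(corpus.values(), key=score, reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    # A real system would prompt a language model with the retrieved context;
    # here we simply echo the grounding passage.
    return f"Based on the retrieved source: {context[0]}"

query = "What does a contract require?"
answer = generate(query, retrieve(query))
print(answer)
```

The point of the pattern, and of the Stanford critique, is that grounding reduces but does not eliminate hallucination: the generation step can still misread or misattribute the retrieved material.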

So the Stanford team tested three of the RAG systems for themselves, Lexis+ AI from LexisNexis and Westlaw AI-Assisted Research & Ask Practical Law AI from Thomson Reuters. The authors note they are not singling out LexisNexis or Thomson Reuters for opprobrium. On the contrary, these tools are less opaque than their competition and so more easily examined. They found that these systems are more accurate than the general-purpose models like GPT-4. However, the authors write:

“But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”

These hallucinations come in two flavors. Many responses are flat out wrong. Others are misgrounded: they are correct about the law but cite irrelevant sources. The authors stress this second type of error is more dangerous than it may seem, for it may lure users into a false sense of security about the tool’s accuracy.

The post examines challenges particular to RAG-based legal AI systems and discusses responsible, transparent ways to use them, if one must. In short, it recommends public benchmarking and rigorous evaluations. Will law firms listen?

Cynthia Murrell, June 17, 2024


Price Fixing Is Price Fixing with or without AI

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Small-time landlords, such as the mom and pops who invested in property for retirement, shouldn’t be compared to large, corporate landlords. The corporate landlords, however, give them all a bad name. Why? Because of actions like price fixing. ProPublica details how politicians are fighting the practice: “We Found That Landlords Could Be Using Algorithms To Fix Rent Prices. Now Lawmakers Want To Make The Practice Illegal.”

RealPage sells software built around an AI algorithm that collects rent data and recommends how much landlords should charge. Lawmakers want to ban AI-based price fixing so landlords won’t become cartels that coordinate pricing. RealPage and its allies defend the software, while lawmakers have introduced a bill to ban it.

The FTC also states that AI-based real estate software has problems: “Price Fixing By Algorithm Is Still Price Fixing.” The FTC isn’t against technology. They’re against technology being used as a tool to cheat consumers:

“Meanwhile, landlords increasingly use algorithms to determine their prices, with landlords reportedly using software like “RENTMaximizer” and similar products to determine rents for tens of millions(link is external) of apartments across the country. Efforts to fight collusion are even more critical given private equity-backed consolidation(link is external) among landlords and property management companies. The considerable leverage these firms already have over their renters is only exacerbated by potential algorithmic price collusion. Algorithms that recommend prices to numerous competing landlords threaten to remove renters’ ability to vote with their feet and comparison-shop for the best apartment deal around.”
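The coordination mechanism the FTC describes can be made concrete with a deliberately simplistic toy model, assuming nothing about RealPage’s actual software: if competing landlords all feed their rents to one shared recommender that nudges everyone toward the top of the pool, prices ratchet upward in lockstep, which is exactly the comparison-shopping harm the quote flags:

```python
# Toy model of algorithmic price coordination among competitors who all use
# one shared pricing recommender. This is an illustration of the mechanism,
# not a description of any real product's algorithm.

def recommend(all_rents: list[int]) -> int:
    # A coordinating recommender: suggest the highest rent seen in the pool.
    return max(all_rents)

rents = [1000, 1100, 1200]  # three hypothetical competing landlords
for _ in range(3):
    # Each landlord independently adopts the shared recommendation.
    rents = [recommend(rents) for _ in rents]

print(rents)  # every landlord now charges what was the maximum rent
```

With independent pricing, the low-rent landlord would win comparison shoppers; with a shared recommender, that competitive pressure disappears.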

This is an example of how to use AI for evil. The problem isn’t the tool; it’s the humans using it.

Whitney Grace, June 3, 2024
