ChatBots: For the Faithful Factually?

May 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I spotted Twitch’s AI-fueled ask_Jesus. You can take a look at this link. The idea is that smart software responds in a way a cherished figure would. If you watch the questions posed by registered Twitchers, you can wait a moment and the AI Jesus will answer the question. Rather than paraphrase or quote the smart software, I suggest you navigate to this Bezos bulldozer property and check out the “service.”

I mention the Amazon offering because I noted another smart religion robot write up called “India’s Religious AI Chatbots Are Speaking in the Voice of God and Condoning Violence.” The article touches upon several themes which I include in my 2023 lecture series about the shadow Web and misinformation from bad actors and wonky smart software.

This Rest of World article reports:

In January 2023, when ChatGPT was setting new growth records, Bengaluru-based software engineer Sukuru Sai Vineet launched GitaGPT. The chatbot, powered by GPT-3 technology, provides answers based on the Bhagavad Gita, a 700-verse Hindu scripture. GitaGPT mimics the Hindu god Krishna’s tone — the search box reads, “What troubles you, my child?”

The trope is for the “user” to input a question and the smart software to output a response. And Sukuru’s is not the only version: there are allegedly five GitaGPTs available “with more on the way.”

The article includes a factoid in a quote allegedly from a human AI researcher; to wit:

Religion is the single largest business in India.

I did not know this. I thought it was outsourced work product. Live and learn.

Is there some risk with religious chatbots? The write up states:

Religious chatbots have the potential to be helpful, by demystifying books like the Bhagavad Gita and making religious texts more accessible, Bindra said. But they could also be used by a few to further their own political or social interests, he noted. And, as with all AI, these chatbots already display certain political biases. [The Bindra is Jaspreet Bindra, AI researcher and author of The Tech Whisperer]

I don’t want to speculate what the impact of a religious chatbot might be if the outputs were tweaked for political or monetary purposes.

I will leave that to you.

Stephen E Arnold, May 19, 2023

The Ebb and More Ebby of Technology Full Time Jobs

May 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do recent layoffs herald a sea change for the tech field? The Pragmatic Engineer and blogger Gergely Orosz examines “What Big Tech Layoffs Suggest for the Industry.” Probably not much, we think. Gergely penned his reflections just after Microsoft axed 10,000 jobs in January. Soon after Google followed suit, cutting 12,000 positions. Gergely appended a note stating those cuts strengthen his case that layoffs are significant. He writes:

“The layoffs at Microsoft suggest that in 2023, the tech industry may stall growth-wise. By cutting 5% of staff, Microsoft reduces its headcount from 221,000 to around 211,000. We can expect that by the middle of this year, the company’s headcount will increase, but only modestly, and still be below the 221,000 figure it was at last July. … Microsoft’s layoffs worry me precisely because the company has a very good track record of predicting how the business will grow or shrink.”

Okay, so Microsoft and other “nimble” tech companies are responding to market forces. That is what businesses do. Gergely himself notes:

“It’s certain we’ll see a correction of 2021-22’s hiring frenzy and it’s a given that Big Tech will hire much less this year than in 2022, while the question remains whether other large tech companies will follow suit and announce layoffs in the coming months.”

Well, yes, they did. Nevertheless, tech workers, especially developers, remain in high demand compared to other fields. And when the proverbial stars align, hiring is sure to surge again as it did a couple of years ago. Until the next correction. And so on.

Cynthia Murrell, May 19, 2023

Time at Work: Work? Who Has Time?

May 18, 2023

I recall data from IDC years ago which suggested or asserted or just made up the following:

knowledge workers spend more than one day each week looking for information.

Other mid tier consulting firms jumped on the bandwagon. Examples include:

  • McKinsey (yep, the outfit eager to replace human MBAs with digital doppelgängers) says it is 9.3 hours a week
  • A principal analyst offers up 2.5 hours per day or 12.5 hours per week searching for information

Now let’s toss in a fresh number. The Rupert Murdoch Wall Street Journal asserts “Workers Now Spend Two Full Days a Week on Email and in Meetings.” I assume this includes legal preparation for the voting machine hoo hah.
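Taken at face value, these figures leave remarkably little of the standard week. A back-of-the-envelope tally, assuming an eight-hour day and a five-day week:

```python
# Back-of-the-envelope tally of the claims above, assuming an
# eight-hour day and a five-day work week.
HOURS_PER_DAY = 8
DAYS_PER_WEEK = 5
week = HOURS_PER_DAY * DAYS_PER_WEEK  # 40 hours

searching = 12.5                     # the principal analyst's figure, hours per week
email_meetings = 2 * HOURS_PER_DAY   # WSJ: two full days a week

remaining = week - searching - email_meetings
print(remaining)  # 11.5 hours left for, well, work
```

Swap in the IDC or McKinsey figure and the arithmetic barely improves.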

What do these numbers suggest when workers are getting RIFed and college graduates are wandering in the wilderness hoping like a blind squirrel that an acorn will trip them?

With meetings, email, and hunting for information, who has time for work? Toss in some work from home flexibility and the result is… why nothing seems to work. Whether it is locating information in an ad supported network, browsing Twitter without logging in, or making “contacts” on LinkedIn — the work part of work is particularly slippery.

Microsoft needs a year to fix a security issue. Google is — any day now — rolling out smart software in most of its products except in the European Union due to some skepticism about the disconnect between Googley words and Googley actions. Cyber security firms are deploying proactive systems as the individual cyber security developers work overtime to deal with new threats.

I am surprised when something works; for example, a Southwest flight takes off and lands mostly on time, an Amazon package arrives the next day as promised, and my Kia is not stolen due to engineering that causes automobile insurance companies to let loose a flight of legal eagles.

Net net: Not too many people work. Quite a few say they work and some are stressed about their work. But work? Who has time? The purpose of work is to not work.

Stephen E Arnold, May 18, 2023

Neeva: Is This Google Killer on the Run?

May 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Sometimes I think it is 2007 doing the déjà vu dance. I read “Report: Snowflake Is in Advanced Talks to Acquire Search Startup Neeva.” Founded by Xooglers, Neeva was positioned to revolutionize search and generate subscription revenue. Along the highway to the pot of gold, Neeva would deliver on-point results. How did that pay-for-search model work out?

According to the article:

Snowflake Inc., the cloud-based data warehouse provider, is reportedly in advanced talks to acquire a search startup called Neeva Inc. that was founded by former Google LLC advertising executive Sridhar Ramaswamy.

Like every other content processing company I bump into, Neeva was doing smart software. Combine the relevance angle with generative AI and what do you get? A startup that is going to be acquired by a firm with some interesting ideas about how to use search and retrieval to make life better.

Are there other search outfits with a similar business model? Sure, Kagi comes to mind. I used to keep track of startups which had technology that would provide relevant results to users and a big payday to the investors. Do these names ring a bell?

Cluuz
Deepset
Glean
Kyndi
Siderian
Umiboza

If the Snowflake Neeva deal comes to fruition, will it follow the trajectory of IBM Vivisimo? Vivisimo disappeared as an entity and morphed into a big data component. No problem. But Vivisimo was a metasearch and on-the-fly tagging system. Will the tie-up be similar to the Microsoft acquisition of Fast Search & Transfer? Fast still lives, but I don’t know too many Softies who know the backstory. Then there is the HP Autonomy deal. That acquisition is still playing out in the legal eagle sauna.

Few care about the nuances of search and retrieval. Those seemingly irrelevant details can have interesting consequences. Some are okay like the Dassault Exalead deal. Others? Less okay.

Stephen E Arnold, May 18, 2023

Harvard and a Web Archive Tool

May 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The Library of Congress has dropped the ball and the Internet Archive may soon be shut down. So it is Harvard to the rescue. At least until people sue the institution. The university’s Library Innovation Lab describes its efforts in, “Witnessing the Web is Hard: Why and How We Built the Scoop Web Archiving Capture Engine.”

“Our decade of experience running Perma.cc has given our team a vantage point to identify emerging challenges in witnessing the web that we believe extend well beyond our core mission of preserving citations in the legal record. In an effort to expand the utility of our own service and contribute to the wider array of core tools in the web archiving community, we’ve been working on a handful of Perma Tools. In this blog post, we’ll go over the driving principles and architectural decisions we’ve made while designing the first major release from this series: Scoop, a high-fidelity, browser-based, single-page web archiving capture engine for witnessing the web. As with many of these tools, Scoop is built for general use but represents our particular stance, cultivated while working with legal scholars, US courts, and journalists to preserve their citations. Namely, we prioritize their needs for specificity, accuracy, and security. These are qualities we believe are important to a wide range of people interested in standing up their own web archiving system. As such, Scoop is an open-source project which can be deployed as a standalone building block, hopefully lowering a barrier to entry for web archiving.”

At Scoop’s core is its “no-alteration principle” which, as the name implies, is a commitment to recording HTTP exchanges with no variations. The write-up gives some technical details on how the capture engine achieves that standard. Aside from that bedrock doctrine, though, Scoop allows users to customize it to meet their unique web-witnessing needs. Attachments are optional and users can configure each element of the capture process, like time or size limits. Another pair of important features is the built-in provenance summary, including preservation of SSL certificates, and authenticity assertion through support for the Web Archive Collection Zipped (WACZ) file format and the WACZ Signing and Verification specification. Interested readers should see the article for details on how to start using Scoop. You might want to hurry, before publishers jump in with their inevitable litigation push.

Cynthia Murrell, May 18, 2023

Okay, Google, How Are Your Fancy Math Recommendation Procedures Working? Just Great, You Say

May 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have no idea what the Tech Transparency Project is. I am not interested in running a Google query for illumination. I don’t want to read jibber jabber from You.com or ChatGPT. In short, I am the ideal dinobaby: set in his ways, uninterested, and skeptical of information from an outfit which wants “transparency” in our shadow age.

I read “YouTube Leads Young Gamers to Videos of Guns, School Shootings.” For the moment, let’s assume that the Transparency folks are absolutely 100 percent off the mark. Google YouTube’s algorithms are humming along with 99.999999 percent (six sigma land) accuracy. Furthermore, let’s assume that the article’s assertion, namely that the Google YouTube ad machine provides people with links known to have direct relevance to a user’s YouTube viewing habits, is accurate.

What’s this mean?

It means that Google is doing a bang up job of protecting young people, impressionable minds, and those who stumble into a forest of probabilities from “bad stuff.” The Transparency Project has selected outlier data and does not understand the brilliant and precise methods of the Google algorithm wizards. Since people at the Transparency Project do not (I shall assume) work at Google, how can these non-Googlers fathom the subtle functioning of the Google mechanisms? Remember the old chestnut about people who thought cargo planes were a manifestation of God? Well, cargo cult worshippers need to accept the Google reality.

Let’s take a different viewpoint. Google is a pretty careless outfit. Multiple products and internal teams squabble with one another over the Foosball table. Power struggles erupt in the stratospheric intellectual heights of Google carpetland and Google Labs. Wizards get promoted and great leaders who live far, far away become the ones with the passkey to the smart software control room. Lesser wizards follow instructions, and the result may be what the Tech Transparency write up is describing — mere aberrations, tiny shavings of infinitesimals which could add up to something, or a glitch in a threshold setting caused by a surge of energy released when a Googler learns about a new ChatGPT application.


A researcher explaining how popular online video services can shape young minds. As Charles Colson observed, “Once you have them by the [unmentionables], their hearts and minds will follow.” True or false when it comes to pumping video information into immature minds of those seven to 14 years old? False, of course. Balderdash. Anyone suggesting such psychological operations is unfit to express an opinion. That sounds reasonable, right? Art happily generated by the tireless servant of creators — MidJourney, of course.

The write up states:

  • YouTube recommended hundreds of videos about guns and gun violence to accounts for boys interested in video games, according to a new study.
  • Some of the recommended videos gave instructions on how to convert guns into automatic weapons or depicted school shootings.
  • The gamer accounts that watched the YouTube-recommended videos got served a much higher volume of gun- and shooting-related content.
  • Many of the videos violated YouTube’s own policies on firearms, violence, and child safety, and YouTube took no apparent steps to age-restrict them.

And what supports these assertions which fly in the face of Googzilla’s assertions about risk, filtering, concern for youth, yada yada yada?

Let me present one statement from the cited article:

The study found YouTube recommending numerous other weapons-related videos to minors that violated the platform’s policies. For example, YouTube’s algorithm pushed a video titled “Mag-Fed 20MM Rifle with Suppressor” to the 14-year-old who watched recommended content. The description on the 24-second video, which was uploaded 16 years ago and has 4.8 million views, names the rifle and suppressor and links to a website selling them. That’s a clear violation of YouTube’s firearms policy, which does not allow content that includes “Links in the title or description of your video to sites where firearms or the accessories noted above are sold.”

What’s YouTube doing?

In my opinion, here’s the goal:

  • Generate clicks
  • Push content which may attract ads from companies looking to reach a specific demographic
  • Ignore the suits-in-carpetland in order to get a bonus, a promotion, or a better job.

The culprit is, from my point of view, the disconnect between Google’s incentive plans for employees and the hand waving baloney in its public statements and footnote-heavy PR like “Ethical and Social Risks of Harm from Language Models.”

If you are wearing Google glasses, you may want to check out the company with a couple of other people who are scrutinizing the disconnect between what Google says and what Google does.

So which is correct? The Google is doing God, oh, sorry, doing good. Or, the Google is playing with kiddie attention to further its own agenda?

A suggestion for the researchers: Capture the pre-roll ads, the mid-roll ads, and the end-roll ads. Isn’t there data in those observations?

Stephen E Arnold, May 17, 2023

Kiddie Research: Some Guidelines

May 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The practice of performing market research on children will not go away any time soon. It is absolutely vital, after all, that companies be able to target our youth with pinpoint accuracy. In the article “A Guide on Conducting Better Market and User Research with Kids,” Meghan Skapyak of the UX Collective shares some best practices. Apparently these tips can help companies enthrall the most young people while protecting individual study participants. An interesting dichotomy. She writes:

“Kids are a really interesting source of knowledge and insight in the creation of new technology and digital experiences. They’re highly expressive, brutally honest, and have seamlessly integrated technology into their lives while still not fully understanding how it works. They pay close attention to the visual appeal and entertainment-value of an experience, and will very quickly lose interest if a website or app is ‘boring’ or doesn’t look quite right. They’re more prone to error when interacting with a digital experience and way more likely to experiment and play around with elements that aren’t essential to the task at hand. These aspects of children’s interactions with technology make them awesome research participants and testers when researchers structure their sessions correctly. This is no easy task however, as there are lots of methodological, behavioral, structural, and ethical considerations to take in mind while planning out how your team will conduct research with kids in order to achieve the best possible results.”

Skapyak goes on to blend and summarize decades of research on ethical guidelines, structural considerations, and methodological experiments in this field. To her credit, she starts with the command to “keep it ethical” and supplies links to the UN Convention on the Rights of the Child and UNICEF’s Ethical Research Involving Children. Only then does she launch into techniques for wringing the most shrewd insights from youngsters. Examples include turning it into a game, giving kids enough time to get comfortable, and treating them as the experts. See the article for more details on how to better sell stuff to kids and plant ideas in their heads while not violating the rights of test subjects.

Cynthia Murrell, May 17, 2023

Thinking about AI: Is It That Hard?

May 17, 2023

I read “Why I’m Having Trouble Covering AI: If You Believe That the Most Serious Risks from AI Are Real, Should You Write about Anything Else?” The version I saw was a screenshot, presumably to cause me to go to Platformer in order to interact with it. I use smart software to convert screenshots into text, so the risk reduced by the screenshot was in the mind of the creator.

Here’s a statement I underlined:

The reason I’m having trouble covering AI lately is because there is such a high variance in the way that the people who have considered the question most deeply think about risk.

My recollection is that Daniel Kahneman allegedly cooked up the idea of “prospect theory.” As I understand the idea, humans are not very good when thinking about risk. In fact, some people take risks because they think a problem can be averted. Others avoid risk because omission is okay; for example, not reporting a financial problem. Why not just leave it out and cook up a footnote? Omissions are often okay with some government authorities.

I view the AI landscape from a different angle.

First, smart software has been chugging along for many years. May I suggest you fire up a copy of Microsoft Word, use it with its default settings, and watch how words are identified, phrases underlined, and letters automatically capitalized? How about using Amazon to buy lotion? Click on the Buy Now button and navigate to the order page. It’s magic. Amazon has used software to perform tasks which once required a room full of clerks. There are other examples. My point is that the current baloney roll is swelling from its own gaseous emissions.

Second, the magic of ChatGPT outputting summaries was available 30 years ago from Island Software. Stick in the text of an article, and the desktop system spat out an abstract. Was it good? If one were a high school student, it was. If you were building a commercial database product fraught with jargon, technical terms, and abstruse content, it was not so good. Flash forward to now. Bing, You.com, and presumably the new and improved Bard are better. Is this surprising? Nope. Thirty years of effort have gone into this task of making a summary. Am I to believe that the world will end because smart software is causing a singularity? I am not reluctant to think quantum supremacy type thoughts. I just don’t get too overwrought.
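For the curious, the kind of extractive summarizer that ran on a 1990s desktop can be sketched in a few lines: score each sentence by the frequency of its content words and keep the top scorers. This is a generic illustration of the technique, not Island Software’s actual algorithm.

```python
# Toy frequency-based extractive summarizer: rank sentences by how many
# "important" (frequent, non-stopword) words they contain. Decades-old
# technique; no neural network required.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "it", "that"}

def summarize(text: str, max_sentences: int = 1) -> str:
    """Return the highest-scoring sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        # Stopwords score zero because Counter returns 0 for missing keys.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)

print(summarize("Cats are mammals. Cats chase mice. Dogs bark."))  # Cats chase mice.
```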

Third, using smart software and methods which have been around for centuries — yep, centuries — is a result of easy-to-use tools being available at low cost or free. I find You.com helpful; I don’t pay for it. I tried Kagi and Neeva; not so useful, and I won’t pay for them. Swisscows.com works reasonably well for me. Cash conserving and time saving are important. Smart software can deliver this easily and economically. When the math does not work, then I am okay with manual methods. Will the smart software take over the world and destroy me, as an Econ Talk guest suggested? Sure. Maybe? Soon. What’s soon mean?

Fourth, the interest in AI, in my opinion, is a result of several factors: [a] Interesting demonstrations and applications at a time when innovation becomes buying or trying to buy a game company, [b] avoiding legal interactions due to behavioral or monopoly allegations, [c] a deteriorating economy due to the Covid and free money events, [d] frustration among users with software and systems focused on annoying, not delighting, their users; [e] the inability of certain large companies to make management decisions which do not illustrate that high school science club thinking is not appropriate for today’s business world; [f] data are available; [g] computing power is comparatively cheap; [h] software libraries, code snippets, off-the-shelf models, and related lubricants are findable and either free to use or cheap; [i] obvious inefficiencies exist so a new tool is worth a try; and [j] the lure of a bright shiny thing which could make a few people lots of money adds a bit of zest to the stew.

Therefore, I am not confused, nor am I overly concerned with those who predict home runs or end-of-world outcomes.

What about big AI brains getting fired or quitting?

Three observations:

First, outfits like Facebook and Google type companies are pretty weird and crazy places. Individuals who want to take a measured approach or who are not interested in having 20-somethings play with their mobiles when contributing to a discussion should get out or get thrown out. Scared or addled or arrogant big company managers want the folks to speak the same language, to be on the same page even if the messages are written in invisible ink, encrypted, and circulated to the high school science club officers.

Second, like most technologies chock full of jargon, excitement, and the odor of crisp greenbacks, expectations are high. Reality is often able to deliver friction the cheerleaders, believers, and venture capitalists don’t want to acknowledge. That friction exists and will make its presence felt. How quickly? Maybe Bud Light quickly? Maybe Google ad practice awareness speed? Who knows? Friction just is and, like gravity, difficult to ignore.

Third, the confusion about AI depends upon the lenses through which one observes what’s going on. What are these lenses? My team has identified five smart software lenses. Depending on what lens is in your pair of glasses and how strong the curvatures are, you will be affected by the societal lens, the technical lens, the individual lens (that’s the certain blindness each of us has), the political lens, and the financial lens. With lots to look at, the choice of lens is important. The inability to discern what is important depends on the context existing when the AI glasses are perched on one’s nose. It is okay to be confused; unknowing adds the splash of Slap Ya Mama to my digital burrito.

Net net: Meta-reflections are a glimpse into the inner mind of a pundit, podcast star, and high-energy writer. The reality of AI is a replay of a video I saw when the Internet made online visible to many people, not just a few individuals. What’s happened to that revolution? Ads and criminal behavior. What about the mobile revolution? How has that worked out? From my point of view it creates an audience for technology which could, might, may, will, or whatever other forward-looking word one wants to use. AI is going to glue together the lowest common denominator of greed with the deconstructive power of digital information. No Terminator is needed. I am used to being confused, and I am perfectly okay with the surrealistic world in which I live.

PS. We lectured two weeks ago to a distinguished group and mentioned smart software four times in two and one half hours. Why? It’s software. It has utility. It is nothing new. My prospect theory pegs artificial intelligence in the same category as online (think NASA Recon), browsing (think remote content to a local device), and portable phones (talking and doing other stuff without wires). Also, my Zepp watch stress reading is in the low 30s. No enlarged or cancerous prospect theory for me at this time.

Stephen E Arnold, May 17, 2023

Fake News Websites Proliferate. Thanks, AI!

May 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Technology has consequences. And in the case of advanced AI chatbots, it seems those who unleashed the tech on the world had their excuses ready. Gadgets360 shares the article, “ChatGPT-Like AI Chatbots Have Been Used to Create 49 News Websites: NewsGuard Report.” Though researchers discovered they were created with software like OpenAI’s ChatGPT and, possibly, Google Bard, none of the 49 “news” sites disclosed that origin story. Bloomberg reporter Davey Alba cites a report by NewsGuard that details how researchers hunted down these sites: They searched for phrases commonly found in AI-generated text using tools like CrowdTangle (a sibling of Facebook) and Meltwater. They also received help from the AI text classifier GPTZero. Alba writes:

“In several instances, NewsGuard documented how the chatbots generated falsehoods for published pieces. In April alone, a website called CelebritiesDeaths.com published an article titled, ‘Biden dead. Harris acting President, address 9 a.m.’ Another concocted facts about the life and works of an architect as part of a falsified obituary. And a site called TNewsNetwork published an unverified story about the deaths of thousands of soldiers in the Russia-Ukraine war, based on a YouTube video. The majority of the sites appear to be content farms — low-quality websites run by anonymous sources that churn-out posts to bring in advertising. The websites are based all over the world and are published in several languages, including English, Portuguese, Tagalog and Thai, NewsGuard said in its report. A handful of sites generated some revenue by advertising ‘guest posting’ — in which people can order up mentions of their business on the websites for a fee to help their search ranking. Others appeared to attempt to build an audience on social media, such as ScoopEarth.com, which publishes celebrity biographies and whose related Facebook page has a following of 124,000.”
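The phrase-search part of the hunt relies on chatbot boilerplate slipping into published copy when an operator pastes output without editing. A minimal sketch of that idea, with an illustrative phrase list rather than NewsGuard’s actual methodology:

```python
# Illustrative sketch: flag pages whose text contains boilerplate strings
# chatbots often emit. The phrase list is an example, not NewsGuard's
# actual query set.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my training data only goes up to",
]

def flag_ai_boilerplate(page_text: str) -> list[str]:
    """Return the telltale phrases found in a page's text, if any."""
    lowered = page_text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

article = ("Breaking news from our correspondent. As an AI language model, "
           "I cannot verify the casualty figures in this report.")
print(flag_ai_boilerplate(article))  # ['as an ai language model']
```

A real pipeline would, as the report describes, run such queries at scale over social and media monitoring tools and then hand ambiguous cases to a classifier like GPTZero.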

Naturally, more than half the sites they found were running targeted ads. NewsGuard reasonably suggested AI companies should build in safeguards against their creations being used this way. Both OpenAI and Google point to existing review procedures and enforcement policies against misuse. Alba notes the situation is particularly tricky for Google, which profits from the ads that grace the fake news sites. After Bloomberg alerted it to the NewsGuard findings, the company did remove some ads from some of the sites.

Of course, posting fake and shoddy content for traffic, ads, and clicks is nothing new. But, as one expert confirmed, the most recent breakthroughs in AI technology make it much easier, faster, and cheaper. Gee, who could have foreseen that?

Cynthia Murrell, May 16, 2023

Architects: Handwork Is the Future of Some?

May 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I think there is a new category of blog posts. I call it the “we will be relevant” essay. A good example is from Architizer and its essay “5 Reasons Why Architects Must Not Give Up On Hand Drawings and Physical Models: Despite the Rise of CAD, BIM and Now AI, Low-Tech Creative Mediums Remain of Vital Importance to Architects and Designers.” [Note: BIM is an acronym for “building information modeling.”]

The write up asserts:

“As AI-generated content rapidly becomes the norm, I predict a counter-culture of manually-crafted creations, with the art of human imperfection and idiosyncrasy becoming marketable in its own right,” argued Architizer’s own Paul Keskeys in a recent Linkedin post.

The person doing the predicting is the editor of Architizer.

Now look at this architectural rendering of a tiny house. I generated it in a minute using MidJourney, a Jim Dandy image outputter.


I think it looks okay. Furthermore, I think it is a short step from the rendering to smart software outputting the floor plans, bill of materials, a checklist of legal procedures to follow, the content of those legal procedures, and a ready-to-distribute tender. The notion of threading together pools of information into a workflow is becoming a reality if I believe the hot sauce doused on smart software when TWIST, Jason Calacanis’ AI-themed podcast, airs. I am not sure the vision of some of the confections explored on this program is right around the corner, but the functionality is now in a rental cloud computer and ready to depart.

Why would a person who wants to buy a tiny house pay a human to develop designs, go through the grunt work of figuring out window sizes, and get the specification ready for review? I just want a tiny house as reasonably priced as possible. I don’t want a person handcrafting a baby model with plastic trees. I don’t want a human junior intern plugging in the bits and pieces. I want software to do the job.

I am not sure how many people like me are thinking about tiny houses, ranch houses, or non-tilting apartment towers. I do think that architects who do not get with the smart software program will find themselves in a fight for survival.

CAD, BIM, and AI are capabilities that evoke images of buggy whip manufacturers who do not shift to Tesla interior repairs.

Stephen E Arnold, May 16, 2023
