Facebook Is Nothing If Not Charming

October 5, 2020

Facebook spies on its users by collecting personal information about their hobbies, birthdays, relationships, and vacation spots. Facebook users voluntarily share this information publicly and/or privately, and the company sells it to advertisers. Facebook also spies on its competitors, but it does so in a more sophisticated way, says the BBC article “Facebook Security App Used To ‘Spy’ On Competitors.”

Facebook apparently used its cross-platform Onavo VPN to knowingly collect information on its competitors in violation of privacy laws. The Commons Committee discussed the incident in a report that runs more than one hundred pages. Here is the gist of the report:

“The Digital, Culture, Media and Sport Committee wrote that through the use of Onavo, which was billed as a way to give users an extra layer of security, Facebook could ‘collect app usage data from its customers to assess not only how many people had downloaded apps, but how often they used them.’”

The report added:

“This knowledge helped them to decide which companies were performing well and therefore gave them invaluable data on possible competitors. They could then acquire those companies, or shut down those they judged to be a threat.”

Even more alarming are the details about the ways Facebook could shut down services it provides to its competition. Twitter’s video-sharing app Vine is an example of how Facebook destroyed a competitor. Twitter wanted Vine users to find friends via their Facebook accounts, but Zuckerberg nixed that idea. Vine shut down in 2016.

Facebook does something equally nefarious with a whitelist of approved apps that are allowed to use Facebook user data. Among the 5,000 approved apps are Netflix, Airbnb, and Lyft. These app companies supposedly spend $250,000 on Facebook advertising to keep their coveted position.

Zuckerberg wrote in an email:

“I think we leak info to developers, but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

There was the Cambridge Analytica scandal, in which voter information was collected through a personality quiz. The data of users and their friends was stolen, used to profile 82 million Americans, and then sold to Cambridge Analytica. The United Kingdom fined Facebook £500,000 and the company apologized.

It is not the first time Facebook has taken and sold user information, and it will not be the last. We wonder how its competitors spy on users and sell their data.

Whitney Grace, October 5, 2020

Why Software Is Getting Worse

October 5, 2020

I have noticed that certain popular applications are getting harder to use, less reliable, and increasingly difficult to remediate. Examples range from the Google Maps interface to Flipboard. I suppose those individuals who use these applications frequently find their hidden functions delightful. I just walk away.

Now there’s an explanation of sorts. Navigate to “Devs Are Managing 100x More Code Now Than They Did in 2010.” As I recall, software was flaky in 2010, but if the information in the article is accurate, the slide downhill is accelerating.

The write up explains:

Some of this code growth can be explained by increasingly complex code, but much of it comes from an increase in the diversity of platforms and tools used. Modern development—particularly Web development—generally means amalgams of many different platforms, libraries, and dependencies. The developers surveyed reported increases in the number of supported architectures, devices, languages, repositories, and more.

More code, more complexity, and what do you get?

Interfaces that confuse the user. Weird error messages that point to nothing comprehensible. Certified upgrades that don’t install.

Is there a fix? Sure, just like the fix for the deteriorating physical infrastructure of roads and bridges.

Talk, promises, and budget discussions.

The result? Downhill fast, folks.

Stephen E Arnold, October 5, 2020

Palantir: Planning Ahead

September 4, 2020

I read “In Amended Filing, Palantir Admits It Won’t Have Independent Board Governance for Up to a Year.” The legal tap dancing is semi-interesting. Palantir wants money and control. I understand that motive. The company — despite its sudden interest in becoming a cowboy — has Silicon Valley roots.


What’s fascinating is that the company was founded in 2004, although I have seen references to 2003. No big deal. Just a detail. The key point is that the company has been talking about an initial public offering for years.

The write up explains that after submitting an S-1 form to the Securities & Exchange Commission, Palantir submitted a revised or amended S-1. For a firm which provides intelware and policeware to government agencies, planning and getting one’s ducks in a row seem to be important attributes.

Did Palantir just dash off the first S-1 at Philz Coffee? Then did some bright young stakeholder say, “Yo, dudes, we need to make sure we keep control. You know, like the Zuck.”

After 16 years in business and burning through a couple of tractor trailers filled with cash, it seems untoward to submit a revision hard on the heels of an SEC S-1 filing.

Careless, disorganized, or what the French call l’esprit d’escalier: whichever it is, it strikes me as telling.

Observations:

  1. The resubmission suggests carelessness and flawed management processes
  2. The action raises the question, “Are these Silicon Valley cowboys getting desperate for an exit?”
  3. For a low-profile outfit engaged in secret work for some of its clients, public actions increase the scrutiny on a company which, after a decade and a half, is not profitable.

Interesting behavior from Palantirians. Did the seeing stone suffer a power outage?

Stephen E Arnold, September 4, 2020

Amazon: Employee Surveillance and the Bezos Bulldozer with DeepLens, Ring, and Alexa Upgrades

September 4, 2020

Editor’s Note: This link to Eyes Everywhere: Amazon’s Surveillance Infrastructure and Revitalizing Worker Power may go bad; that is, happy 404 to you. There’s not much DarkCyber can do. Just a heads up, gentle reader.

The information in a report by Open Markets called Amazon’s Surveillance Infrastructure and Revitalizing Worker Power may be difficult to verify and comprehend. People think of Amazon in terms of boxes with smiley faces and quick deliveries of dog food and Lightning cables.


Happy Amazon boxes.

The 34-page document paints a picture of sad Amazon boxes.


The main point is that the Bezos bulldozer drives over employees, not just local, regional, and national retail outlets:

A fundamental aspect of its power is the corporation’s ability to surveil every aspect of its workers’ behavior and use the surveillance to create a harsh and dehumanizing working environment that produces a constant state of fear, as well as physical and mental anguish. The corporation’s extensive and pervasive surveillance practices deter workers from collectively organizing and harm their physical and mental health. Amazon’s vast surveillance infrastructure constantly makes workers aware that every single movement they make is tracked and scrutinized. When workers make the slightest mistake, Amazon can use its surveillance infrastructure to terminate them.

Several observations:

  1. Amazon is doing what Amazon does. Just like beavers doing what beavers do. Changing behavior is not easy. Evidence: Ask the parents of a child addicted to opioids.
  2. Stakeholders are happy. Think of the song with the line “money, money, money.”
  3. Amazon has the cash, clout, and commitment to pay for lobbying the US government. So far the President of the United States has been able to catch Amazon’s attention with a JEDI sword strike, but that has not slowed down Darth Jeff.

Net net: After 20-plus years of zero meaningful regulation, the activities of the Bezos bulldozer should be viewed as a force (like “May the force be with you”). DarkCyber wants to point out that Amazon is also in the policeware business. The write up may be viewed as validation of Amazon’s investments in this market sector.

Stephen E Arnold, September 4, 2020

Technical Debt: Nope, It Exists and That Debt Means Operational Poverty, Then Death

August 28, 2020

“Technical Debt Doesn’t Exist” is an interesting view of software. The problem is that “technology” is not just software. The weird behavior of an Adobe application like Framemaker can be traced to the program’s Unix roots. But why, one asks, is it so darned difficult to manage colors in a program intended to print documents with some parts in color? What about the mysterious behavior of Windows 10 when a legal installation collects $0.99 for an HEVC codec only to report that the codec cannot be installed? What about the enterprise application from OpenText that cannot display a document recently displayed to the user of the content management system? Are these problems due to careless programming?

According to the article:

There is no such thing as technical debt. There is work to do, that we can agree on, but it’s not debt payment.

The punch line for the write up is that technical debt is just maintenance.

Let’s think about this.

The constraints of Framemaker result from its Unix roots. Now, decades later, those roots still exist. Like the original i2 Analyst’s Notebook (a policeware system), some functions were constrained by the lovely interaction of the hardware, the operating system, and the code. The Unix touches remain today: Enter Escape O P C and the list of styles pops up. Yep, commands from 40 years ago are still working and remain inscrutable to anyone trying to learn the program. Why aren’t there changes? Adobe tried and ended up with InDesign. I would suggest that the cost of “fixing up” Framemaker was too high, even if Adobe could corral engineers who could do the job. Framemaker, therefore, is still around, but it is an orphan and a problematic one at that.

What about Microsoft and a codec? The fact that Microsoft makes a free version available for a person willing to put in the time to locate the HEVC download is one thing. Charging $0.99 for a codec which cannot be installed is another. Figuring out the unknown and unanticipated interactions among video hardware, software in the Windows 10 fun house, and third-party software is too expensive. What’s the fix? Ignore the problem. Put out some marketing baloney and tell the human doing customer support to advise the person with the failed codec to reinstall Windows. Yeah, right. A problem exists that will be around for exactly as long as there is Windows 10.

What about the OpenText content management system? We encountered this problem when trying to figure out why users of the system could not locate a file which had been saved the previous day. We poked around the hardware; we poked around the content management system; we poked around the search system which turned out to be an Autonomy stub. Yep, Autonomy search was “in” the OpenText system. The issue was the interaction of the Autonomy search system first crafted in the late 1990s, the content management system which OpenText bought from a vendor, and the hardware used to run the system. Did OpenText care? Nope, not at all. Open a file and wait 15 minutes. And what about the missing file? Updates sat in a queue and usually took place a couple of days after the Save command was issued. The fix? Ho ho ho.

Let me be clear: When a system is coded and it sort of works, that system is deployed. If a problem surfaces quickly, the vendor will have someone fix it. If it is a big problem, maybe two or three people will work on the issue. Whatever must be done to get the phone to stop ringing, the email to stop arriving, and angry customers to stop having their lawyers write nastygrams will be done. Then it is over. No one will go back and figure out what went wrong, make fixes, and dutifully put the ship in proper shape. The mistake is embedded in digital amber and the “fix” is part of the woodwork. How often do you look at the plumbing connections from the outside water line to your hot water heater? What happens when there’s a leak? A fix is made and then forgotten.

What about technical debt? The behaviors I have described mean that systems persist through time. The systems are not refactored or “fixed”. The systems are just patched. Amazon enshrines this process in its two-pizza teams. And how about the documentation for the fixes made on Saturday morning at 3 am? Ho ho ho.

Let me offer some observations:

  1. Significant changes to software today are mostly cosmetic, what I call wrappers. The problems remain but their pointy parts are blunted.
  2. The cost of making fundamental changes is beyond the reach of even the largest and most resource-rich organizations.
  3. The humans required to figure out where the problem is and make structural changes are almost impossible to find for most technologies.

The article calls this maintenance. I think that’s an okay word, but the reality is that today’s software (particularly software built on recycled libraries), existing systems accessed via application programming interfaces, and hardware with components with checkered or unknown pasts are not going to be “fixed.”

We live in an era of “good enough.”

The technical debt is going to catch up to those who sell and develop products. Users are already paying the price.

What happens if one pushes technical debt into tomorrow or next week?

That’s an easy question to answer. The vaunted “user experience” becomes more like a carnival act while the behind-the-scenes activity grows less and less savory. How about those mandatory updates which delete photos, kill a Mac desktop, or allow a mobile phone to go dead because of a bug? The new normal.

It’s just maintenance. We know how much bean counters like to allocate cash for maintenance. Operational poverty, then the death of innovation.

Stephen E Arnold, August 28, 2020

Thinking about Risk: No Clip On Bow Tie

August 15, 2020

I read “Risk Bow Tie Method.” I worked through the write up, which reminded me of a reading in one of those professor-assembled Kinko’s books students HAD to purchase. The focus is a management procedure for thinking about risk. Today, there are some interesting topics which MBA study groups can consider on a thrilling Zoom call. As I examined the increasingly detailed diagrams, the procedure seemed familiar. I ratted through my files and, yes, I had a paper (maybe I snagged it at a non-Zoom conference in England in the 1990s) called “Lessons Learned from Real World Application of Bow Tie Method.” There’s a version of this document available at this link.

The idea is that something happens, like Covid, serial financial crashes, social unrest, private enterprise controlling information flows, etc. None of these is too serious. The idea is to make a diagram that looks like this one from the 1990s Risktec person’s write up:

[Bow tie diagram from the Risktec write up]

If you want to be a consultant, you need a diagram without explanations. The idea is to bring discipline to a group of people who would rather check out TikTok videos, browse Facebook, or fiddle with their Robinhood account. But a job is a job, whether in person or on a Zoom call.

The advisor systematically works through the “logic” of figuring out the issues related to the minor risk an organization faces; for example, an enterprise search vendor failing to meet its financial goal for the quarter as cash burns and employees “work” from home. Yep, fill in that logical diagram.
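For anyone who has never seen one of these diagrams, here is a minimal sketch of the structure a bow tie encodes: a top event at the knot, threats on the left with preventive barriers, and consequences on the right with mitigating barriers. The Python names and the enterprise search example below are my own illustrative assumptions, not anything taken from the Risktec paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Threat:
    description: str
    preventive_barriers: List[str] = field(default_factory=list)   # controls on the left side

@dataclass
class Consequence:
    description: str
    mitigating_barriers: List[str] = field(default_factory=list)   # controls on the right side

@dataclass
class BowTie:
    top_event: str                                # the unwanted event at the knot of the bow tie
    threats: List[Threat] = field(default_factory=list)
    consequences: List[Consequence] = field(default_factory=list)

# Hypothetical example: the enterprise search vendor missing its quarter
bow_tie = BowTie(
    top_event="Quarterly revenue target missed",
    threats=[
        Threat("Cash burn outpaces bookings", ["Weekly pipeline review"]),
        Threat("Key accounts defer renewals", ["Early renewal incentives"]),
    ],
    consequences=[
        Consequence("Layoffs and loss of key engineers", ["Retention plan for critical staff"]),
    ],
)
```

Filling in the boxes is the whole exercise; the consultant supplies the template, and the client supplies the content.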

Exercises like this are a gold mine for a consulting firm. Blue chip outfits focus on these “big picture” methods. Mid tier consulting firms and solo practitioners with a Wix Web site and an Instaprint stick-on sign for their automobile may have trouble landing enough work to pay for working through the Bow Tie Method.

So blue chip consulting firms embrace these types of fill-in-the-blank exercises. The consultant gets to “know” the participants and can set the stage for recruiting an insider to function as a cheerleader, absent pom poms. The “report” allows the consulting team to identify which options are better for the company, with the supporting data created by … wait for it … the employees who participated in the Bow Tie Method process. To be fair, the consulting team has to create a PowerPoint or similar presentation. Some consulting firms just write an “Executive Memo” and move on to selling follow-on work.

I must admit I thought of the popular song by Stevie Wonder with these lyrics. Note: I modified the last line to match my reaction to the attempted rejuvenation of the Bow Tie Method:

His father works some days for fourteen hours
And you can bet he barely makes a dollar
His mother goes to scrub the floor for many
And you’d best believe she hardly gets a penny
Living just enough, just enough for the consulting.

Several observations:

  1. Is the Bow Tie Method the correct one for our interesting times? Plug in Covid, fill in the boxes, discuss options, and what do you end up with?
  2. Are the Bow Tie Method and other thought frameworks matched to today’s management climate? Twitter, Facebook, Google, Amazon, and other FAANG outfits create risks, and I am not convinced that objective consideration of the risks to these organizations is top of mind for the top managers at this time. It seems as if the consulting frameworks have to be designed for thumbtypers and consumers of Instagram and Snap apps, not old-school frameworks from who knows where.
  3. The time and cost to work through a full Bow Tie Method may increase risk for the company. Here’s how that works. The leadership of a company or country changes direction. Mixer from Microsoft? Hey, kill that dog. A Google API? No reason to provide that any more. A tweet from the White House changes the social media influencer landscape. As these here-and-now events blaze on digital devices, the time for the Bow Tie and the time for dealing with the here-and-now are out of joint.

Net net: Traditional consulting methods, regardless of the fancy graphics and with-it explanations, seem to be like exhibits in the British Museum. Who knew the Elgin marbles were sitting in a dark room?

Stephen E Arnold, August 15, 2020

Free Content: Like Technology, Now a Political Issue

August 3, 2020

Free content is interesting. It seems to represent a loss when compared to content that costs money. But are these two options the only ones? Nope, digital information has a negative cost. I think that’s a fair characterization of the knowledge road many are walking.

For seven years, I have produced “content” and made it available without charge to law enforcement and intelligence professionals in the US and to US allies. When I embarked on this approach, I met with skepticism and questions like “What’s the catch?”

I learned quickly that “free” means hook, trick, or sales ploy. Intrigued by the reaction, I persisted. Over time, my approach was — to a small number of people — somewhat helpful. In a few weeks, I will be 77, and I don’t plan on changing what I do, terminating the researchers who assist me, or telling those who want me to give a talk or write up a profile about one of the companies I follow to get lost.

I thought about my approach when I read “The Truth Is Paywalled But The Lies Are Free.” The title annoyed me because what I do is free. I could identify an interesting organization which has recently availed itself of one of my free reports. My team and I tried to assemble hard-to-find and little-known information and package it into a format that was easy to understand. Yep, the document was free, and it has found its way into several groups focused on chasing down bad actors.

The write up in Current Affairs, now an online information service, states:

This means that a lot of the most vital information will end up locked behind the paywall… The lie is more accessible than its refutation.

I think I understand. The implication is that the majority of free content has spin, and for-fee content is therefore delivered with less spin or no spin at all.

Is this true?

The reports I prepare describe specific characteristics of a particular technology. In my opinion and that of the researchers who assist me, we make an effort to identify consistent statements, to present information for which there is a document like a technical specification, and to cite use cases that are verifiable.

I suppose the fact that I maintain profiles of companies of little interest to most “real” journalists and pundits creates an exception. What I do can be set aside with the comment, “Yeah, but who really cares about the Polish company Datawalk?”

The write up states:

More reason to have publications funded by the centralized free-information library rather than through subscriptions or corporate sponsorship. Creators must be compensated well. But at the same time we have to try to keep things that are important and profound from getting locked away where few people will see them. The truth needs to be free and universal.

I think I see the point. However, my model is different. The content I produce is a side product of what I do. If someone pays me to produce a product or service, I use that money to keep my research group working.

Money can be generated and a portion of it can be designated to an information task. The challenge is finding a way to produce money and then allocating the funds in a responsible way. Done correctly, there is no need to beg for dollars, snarl at Adam Carolla for selling a “monthly nut,” or criticize information monopolies.

These toll booths for information are a result of choice, a failure of regulatory authorities, the friction of established institutions that want “things the way they have to be” thinking, and selfish agendas.

In short, the lack of high value “free” information is distinctly human. I want to point out that even with information paywalls, there are several ways to obtain useful information:

  1. Talking to people, preferably in person, though email works okay
  2. Obtaining publicly accessible documents; for example, patent applications
  3. Comments posted in discussion groups; for instance, the worker at a large tech company who lets slip a useful factoid
  4. Information recycled by wonky news services; for example, the GoCurrent outfit.

The real issue is that “free” generally triggers fear, doubt, and uncertainty. Paying for something means reliable, factual, and true.

Put my approach aside. Step back from the “create a universal knowledge bank which anyone can access” idea. Forget the pay-the-author angle.

High-value information exists in the flows of data. Knowledge can be extracted from deltas; that is, changes in the data themselves. The trick is point of view and noticing. The more one tries to survive by creating information, the more likely it is that generating cash will be difficult if not impossible.
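As a rough illustration of the delta idea (the record fields and values below are invented for the example), comparing two snapshots of the same record and keeping only what changed often says more than either snapshot does on its own:

```python
def extract_deltas(old: dict, new: dict) -> dict:
    """Return only the fields whose values changed between two snapshots."""
    changed = {}
    for key in old.keys() | new.keys():
        before, after = old.get(key), new.get(key)
        if before != after:
            changed[key] = (before, after)
    return changed

# Hypothetical company profile captured a month apart
january = {"employees": 120, "patents": 14, "cfo": "A. Smith"}
february = {"employees": 95, "patents": 17, "cfo": "B. Jones"}

print(extract_deltas(january, february))
# {'employees': (120, 95), 'patents': (14, 17), 'cfo': ('A. Smith', 'B. Jones')}
```

The headcount drop, the new patents, and the CFO change are the story; the static profile is just background.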

Therefore, high value content can be the result of doing other types of knowledge work. Get paid for that product or service, then generate information and give it away.

That’s what I have been doing, and it seems to work okay for me. For radicals, whiners, monopolists fearful of missing a revenue forecast — do some thinking, then come up with a solution.

What’s going on now seems to be a dead end to me. Ebsco and LexisNexis live in fear of losing a “big customer.” Therefore, prices go up. Fewer people can afford the products. The knowledge these companies have becomes more and more exclusive. I get it.

But these firms, and to some extent the government agencies which charge for data assembled and paid for with taxpayer dollars, are accelerating intellectual loss.

The problem is a human and societal one. I am going to keep chugging along, making my content free. The knowledge economy seems to be one more signal that the datasphere is not a zero-sum game. Think in terms of a negative number. We now have a positive (charging for information), free (accessing information for nothing), and what I call the “data negative” or D-neg (the loss of information and, by extension, of being “informed”).

In my experience, D-neg accelerates stupidity. That’s a bigger problem than many want to think about. Arguing about the wrong thing seems to be the status quo; that is, generating negatives.

Stephen E Arnold, August 3, 2020

Alleged Business Practices of the Rich and Worshipped or Ethics R Us

July 28, 2020

DarkCyber spotted two separate stories which address a common theme. The write ups are “new age” news, so allegations, speculation, and political perspectives infuse the words used in each of these. Nevertheless, both write ups merit noting because two points are useful when a trend line may lurk in the slope between the dots.

The first article is “Google Spying on Users’ Data to Learn How Rival Apps Work: Report.” The article asserts:

Google is reportedly keeping tabs to how its users interact with rival Android apps, selectively monitoring how the users interact with non-Google apps via an internal program to make its own products better.

The article jumps to Google’s unique ability to see lots of data from its privileged position of being involved in each facet of certain markets: channel, partner, vendor, developer, and customer. The operative word in the title is “spying,” but the issue is ethical and socially responsible behavior. Some science club members want access to the good stuff behind the electronics supply room door. Hey, cool.

The second write up is about everyone’s favorite online retailer, cloud vendor, and services firm. DarkCyber thinks the logo of Amazon should be the Bezos bulldozer. It landscapes the way it wants. “Amazon Reportedly Invested in Startups and Gained Proprietary Information before Launching Competitors, Often Crushing the Smaller Companies in the Process” is one of those stories whose title is the story. We noted this passage in the write up as additive:

Amazon met with or invested in their companies, only to later build its own products that directly competed with the smaller company.

Let’s assume that these write ups are mostly accurate. The behaviors are untoward because those duped, bilked, fooled, or swindled assumed that those across the table were playing with an unmarked deck and wanted an honest game.

DarkCyber sees the behavior as similar to a “land grab.” As long as there is minimal anti-monopoly enforcement and essentially zero consequences in a legal process, the companies identified in these write ups can do what they want. DarkCyber thinks that the behaviors are institutionalized; that is, even with changes in senior management and regulatory oversight, the organizations will, like a giant autonomous mine truck, just keep rolling forward. When the truck rolls over a worker, that is collateral damage. That’s how life works in the gee whiz world of high technology.

Stephen E Arnold, July 28, 2020

Zoom, Zoom, Meet, Meet, and Trust, Well?

July 24, 2020

We evolved to be social creatures—long, long before Zoom or MS Teams existed. That is why, as Canada’s CBC declares, “Video Chats Short Circuit a Brain Function Essential for Trust—and That’s Bad for Business.” Journalist Don Pittis writes:

“Canadian research on ‘computer-mediated communication,’ begun long before the current lockdown, shows video chat is an inadequate substitute for real-life interaction. The real thing, dependent on non-verbal cues, is extraordinarily more effective in creating rapport and getting ideas across. Not only that, but the familiarity and trust we currently feel with coworkers during the lockdown’s remote calls rests on connections remembered from back when we sat at a nearby desk or met for lunch. As the lockdown stretches out and the mix of colleagues changes, it may be almost impossible to establish healthy trusting working relationships using remote video chat tools alone. That’s bad for business, said organizational behavior specialist Mahdi Roghanizad from Ryerson University’s Ted Rogers School of Business. The reason: getting a good reading on your fellow workers has been repeatedly shown to be essential for business efficiency, reaching common goals and establishing trust. It is why teams that worked remotely even before the pandemic lockdown always met periodically in person. The latest research shows human-to-human bonding is like a kind of intuitive magic.”

Researchers suggest several reasons for this “magic,” including pheromones, body language, and in-person eye contact. Some have found it is harder to detect when someone is lying across video. One social scientist, the University of Waterloo’s Frances Westley, likens video chat to talking with someone wearing sunglasses—it is less satisfying, and can even sap our energy.

For all these reasons, Pittis suspects the supposed work-from-home “revolution” may not last, as many had predicted. Businesses may find it more productive to summon workers back to the office once the danger is gone. In the meantime, Westley suggests, we should reinforce connections with the occasional (socially distanced, mask-augmented) in-person conversation.

Cynthia Murrell, July 24, 2020

Close Enough for Horse Shoes? Why Drifting Off Course Has Become a Standard Operating Procedure

July 14, 2020

One of the DarkCyber research team members sent me a link to a post on Hacker News: “How Can I Quickly Trim My AWS Bill?” In the write up were some suggestions from a range of people, mostly anonymous. One suggestion caught my researcher’s attention, and I too found it suggestive.

Here’s the statement the DarkCyber team member flagged for me:

If instead this is all about training / the volume of your input data: sample it, change your batch sizes, just don’t re-train, whatever you’ve gotta do.

Some context. Certain cloud functions are more “expensive” than others. Tips range from dumping GPUs for CPUs to “Buy some hardware and host it at home/office/etc.”

I kept coming back to the suggestion “don’t retrain.”

One of the magical things about certain smart software is that the little code devils learn from what goes through the system. The training gets the little devils or daemons out of bed and into the smart software gym.

However, in many smart processes, the content objects processed include signals not in the original training set. Off-the-shelf training sets are vulnerable, just like those cooked up by three people working from home with zero interest in validating the “training data” against the “real world data.”

What happens?

The indexing or metadata assignments “drift.” This means that the smart software devils index a content object in a way that differs from how that content object should be tagged.

Examples range from “this person matches that person” to “we indexed the food truck as a vehicle used in a robbery.” Other examples are even more colorful or tragic, depending on what smart software output one examines. Does Detroit facial recognition ring a bell?

Who cares?

I care. The person directly affected by shoddy thinking about training and retraining smart software, however, does not.

That’s what is troubling about this suggestion. Care and thought are mandatory for initial model training. Then as the model operates, informed humans have to monitor the smart software devils and retrain the system when the indexing goes off track.
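Here is a minimal sketch of what that monitoring might look like. The helper names, the tags, and the 0.2 threshold are assumptions for illustration only, not anyone’s production method: compare the distribution of tags the model assigned on its original training data with the distribution it assigns now, and flag the model for human review and retraining when the two diverge.

```python
from collections import Counter

def label_distribution(labels):
    """Fraction of items assigned to each tag."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def drift_score(baseline, recent):
    """Total variation distance between two tag distributions (0 = identical, 1 = disjoint)."""
    tags = set(baseline) | set(recent)
    return 0.5 * sum(abs(baseline.get(t, 0.0) - recent.get(t, 0.0)) for t in tags)

# Hypothetical tags from the original training run vs. last week's output
baseline = label_distribution(["vehicle", "vehicle", "food_truck", "person"])
recent = label_distribution(["vehicle", "vehicle", "vehicle", "vehicle", "person"])

DRIFT_THRESHOLD = 0.2  # assumption: tune for the real system
if drift_score(baseline, recent) > DRIFT_THRESHOLD:
    print("Metadata assignments are drifting: schedule a human review and retraining.")
```

The check itself is cheap. What costs money is the human who looks at the flag, figures out why the food trucks are now “vehicles used in robberies,” and pays for the retraining run.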

The big, or maybe I should type BIG, problem today is that very few individuals want to do this even if an enlightened superior says, “Do the retraining right.”

Ho ho ho.

The enlightened boss is not going to do much checking, and the outputs of a smart system just keep getting farther off track.

In some contexts, like Google advertising, getting rid of inventory is more important than digging into the characteristics of Oingo (later Applied Semantics) methods. Getting rid of the inventory is job one.

For other model developers, shapers, and tweakers, the suggestion to skip retraining is “good enough.”

That’s the problem.

Good enough has become the way to refactor excellence into substandard work processes.

Stephen E Arnold, July 14, 2020
