Content Control: More and More Popular
December 7, 2021
A couple of recent articles emphasize that at least some effort is being made to control harmful content on social media platforms. Are these examples of responsible behavior or censorship? We are not sure. First up, a resource content creators may wish to bookmark—“5 Banned Content Topics You Can’t Talk About on YouTube” from MakeUseOf. Writer Joy Okumoko goes into detail on banned topics, from spam and deception to different types of sensitive or dangerous content. Check it out if you are curious about what will get a YouTube video taken down or an account suspended.
We also note an article at Engadget, “Personalized Warnings Could Reduce Hate Speech on Twitter, Researchers Say.” Researchers at NYU’s Center for Social Media and Politics set up Twitter accounts and used them to warn certain users their language could get them banned. Just a friendly caution from a fellow user. Their results suggest such warnings could actually reduce hateful language on the platform. The more polite the warnings, the more likely users were to clean up their acts. Imagine that—civility begets civility. Reporter K. Bell writes:
“They looked for people who had used at least one word contained in ‘hateful language dictionaries’ over the previous week, who also followed at least one account that had recently been suspended after using such language. From there, the researchers created test accounts with personas such as ‘hate speech warner,’ and used the accounts to tweet warnings at these individuals. They tested out several variations, but all had roughly the same message: that using hate speech put them at risk of being suspended, and that it had already happened to someone they follow. … The researchers found that the warnings were effective, at least in the short term. ‘Our results show that only one warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%,’ the authors write. Interestingly, they found that messages that were ‘more politely phrased’ led to even greater declines, with a decrease of up to 20 percent.”
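For readers who want to picture the mechanics, here is a minimal, hypothetical sketch of the selection step the researchers describe: flag accounts that recently used a term from a hateful-language dictionary and that follow at least one recently suspended account, then send a (preferably polite) warning. The term list, handles, and function names below are invented for illustration; the study’s actual code is not shown here.

```python
# Hypothetical sketch of the candidate-selection step described in the quote above.
# The term list, handles, and wording are placeholders, not the study's real data.

HATEFUL_TERMS = {"example_slur_1", "example_slur_2"}   # stand-in for a hateful-language dictionary
SUSPENDED_ACCOUNTS = {"@recently_suspended_user"}       # accounts suspended for such language

def is_candidate(recent_tweets, following):
    """True if the user tweeted a flagged term in the past week and
    follows at least one account recently suspended for hate speech."""
    used_flagged_term = any(
        term in tweet.lower() for tweet in recent_tweets for term in HATEFUL_TERMS
    )
    follows_suspended = any(handle in SUSPENDED_ACCOUNTS for handle in following)
    return used_flagged_term and follows_suspended

def warning_text(polite=True):
    # The study found that more politely phrased warnings produced larger declines.
    if polite:
        return ("Just a heads-up: an account you follow was suspended for hateful "
                "language, and similar tweets could put your account at risk too.")
    return "Using hate speech can get your account suspended; it already happened to someone you follow."
```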
The research paper suggests such warnings might be even more effective if they came from Twitter itself or from another organization instead of the researchers’ small, 100-follower accounts. Still, lead researcher Mustafa Mikdat Yildirim suspects:
“The fact that their use of hate speech is seen by someone else could be the most important factor that led these people to decrease their hate speech.”
Perhaps?
Cynthia Murrell, December 7, 2021
More AI Foibles: Inheriting Biases
December 7, 2021
Artificial intelligence algorithms are already implemented in organizations, but the final decisions are still made by humans. It is a fact that algorithms are, unfortunately, programmed with biases against minorities and marginalized communities. It might appear that these biases are purposely built into the AI, but they are not. The problem is that AI designers lack sufficiently diverse data to feed their algorithms. Biases are discussed in The Next Web’s article, “Worried About AI Ethics? Worry About Developers’ Ethics First.”
The article cites Asimov’s famous three laws of robotics and notes that ethics change depending on the situation and the individual. AI cannot distinguish these variables the way humans can, so it must be taught. The question is what ethics AI developers are “teaching” their creations.
Autonomous cars are a great example, because they rely on human and AI input to make decisions to avoid accidents. Is there a moral obligation to program autonomous cars to override a driver’s decision to prevent collisions? Medicine is another worrisome field. Doctors still make critical choices, but will AI remove the human factor in the not too distant future? There are also weaponized drones and other military robots that could prolong warfare or be hacked.
The philosophical trolley problem is cited, followed by this:
“People often struggle to make decisions that could have a life-changing outcome. When evaluating how we react to such situations, one study reported choices can vary depending on a range of factors including the respondent’s age, gender and culture.
When it comes to AI systems, the algorithms’ training processes are critical to how they will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.
If the system was controlling aircraft, or guiding a missile, you’d want a high level of confidence it was trained with data that’s representative of the environment it’s being used in.”
The United Nations has called for “a comprehensive global standard-setting instrument” for a global ethical AI network. It is a step in the right direction, especially when it comes to problems of ethnic diversity. Features such as eye shape, skin color, and other physical traits are understandably overlooked by developers who do not share them. These gaps can be fixed with broadened data collection.
A bigger problem is the differential treatment of the sexes and of socioeconomic backgrounds. Women are viewed as less than second-class citizens in many societies, and socioeconomic status determines nearly everything in all countries. How are developers going to address these ethical issues? How about a deep dive with a snorkel to investigate?
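As a small, concrete starting point for that dive: one routine check developers can run is to break a model’s accuracy out by demographic group instead of reporting a single overall number. The sketch below is purely illustrative; the group labels and toy data are placeholders, not anything from the article.

```python
# Illustrative sketch: disaggregate accuracy by group so weak spots are not averaged away.
# Group labels and toy data are placeholders.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for each demographic group present in the data."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy example: the overall score hides a much worse result for group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "A"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.333...}
```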
Whitney Grace, December 7, 2021
A Digital Don Quixote Saddles Up and Sallies Forth
December 6, 2021
I read “Apple Takes Russia to Court over App Store Ruling.” Wow, not since my high school days have I encountered such an enchanting slice of fiction. The guide book to weird behavior and classical Spanish grammar is, without a doubt in my mind, Don Quixote. The 17th-century Tim Apple is an upper-class type who gets lost in a make-believe world and proceeds to attack windmills. (Did you know a classmate who pronounced the title of the cherished Baedeker of wackiness as “quick oat”?)
The modern day gallant is going after Mother Russia, currently piloted by the warm, colorful lover of cuisine from the Ukraine. The write up reports:
Both 9to5Mac and RT report Apple is asking for a judicial review of a Federal Antimonopoly Service warning from August that allows developers to mention alternatives to the App Store’s in-app payment system. FAS gave Apple until September 30th to alter its policies, but the company declined to change its rules despite the threat of a fine. The opposition parallels Apple’s legal battles in the US. The judge in Epic’s lawsuit against Apple ordered the tech firm to let App Store developers point to other payment systems, but Apple appealed the injunction in hopes of a delay. The court denied Apple’s request, and the company will have until December 9th to let app makers point to other options. Apple will make exceptions to its policy for some media apps in 2022.
As I understand the links and the text above, Apple will follow Tim, its leader, into battle with a Russian institution of note, the group which prevents companies from having a monopoly in a country with a lot of time zones.
As you may recall, Don Quixote is pretty much crazy. The brave hidalgo catches something, maybe an early form of Covid, wakes up, and is no longer crazy. There’s some 17th-century soap-opera maudlin wallowing and finally, thank goodness, the Don dies after returning to normal. Maybe?
Flash forward to the present. Tim Apple is going after the windmill of Russia’s monopoly regulator. How’s this work out? My prediction is that a Russian river tour as an off-site for exceptional app performance is not likely to be a particularly fun trip. It can be nippy in Moscow at this time of the year.
Stephen E Arnold, December 6, 2021
Smart Software and 100 Lessons
December 6, 2021
“100 Lessons from 1 Year of AI Research” breaks down into learning 8.3 “things” a month or 0.27 “things” a day. What’s interesting is that the list suggests that this learning pace is not cumulative; that is, learning does not appear to slow down. If anything, the list suggests that some “insights” cannot be learned, possibly may never be learned; for instance, lesson 24: “Ensure a strong mastery of foundations.” Yes, master.
Here’s another example. Lesson 9:
Try to work towards significant innovations instead of delta improvements that generate only little or negligible insights.
Does this imply that today’s smart software lacks a broader vision? Does incrementalism imply that modifications are situational and that their cumulative or system-wide implications are not known or understood?
Keep in mind that the essay contains 99 other lessons, and as I worked through the list, three points struck me:
- Smart software is single-person centric; that is, what a laborer in the vineyards of artificial intelligence is doing takes place in a bubble. What sets AI apart is that the work product can affect the entire vineyard and maybe the wine industry itself. The best part is that no one knows that this is happening.
- Cutting corners and being “cute” play a part — maybe the major role — in smart software development.
- Join an AI / ML cabal. There is safety and work if one is part of the in crowd.
Pretty interesting. Now how about a list from someone who has been pitching biased algorithms for smart ad sales for three years? What would that list look like? Maybe no entries, or just one: do what’s needed to get a bonus and a promotion. By the way, I try to learn one “thing” per day. Here’s an example: Dr. Timnit Gebru has quite a bit to teach the AI crowd.
Stephen E Arnold, December 6, 2021
Beyond LinkedIn: Crypto and Blockchain Job Listings
December 6, 2021
Here is an important resource for anyone seeking employment in the budding fields of crypto currency and blockchain technology—CryptoJobsList. The site currently hosts over four thousand opportunities for crypto currency and blockchain professionals. Each listing specifies at a glance the employer, the location (many are remote), how long the post has been up, how many applicants it has received, and whether it pays in crypto currency. Clicking on each, of course, leads to more details and an application link. Scrolling all the way to the bottom of the page reveals options to browse by role or by location. There is also a link for employers seeking workers; listings cost a mere $1.99 each. Amid the listings and those features, founder Raman Shalupau shares a few words about his site:
“I’ve started this job board back in end of September 2017, when I was looking for engineering jobs in crypto currency companies myself. I had to jump from site to site, looking for positions in various exchanges, wallets, and research projects. Opportunities were scattered all over the place and pretty hard to come by. So I thought it would be cool to have a centralized (the irony) site with all the positions. I thought no one will care about the job board and it’ll die off in a week, but, apparently more and more people cared enough about it to start applying to jobs, sharing Crypto Jobs List with friends and, of course, companies started listing their job posts. Today I hope you are enjoying the site, applying to jobs and getting response from hundreds of crypto startups that have listings on CJL to day. I strongly believe that blockchain technology and crypto currencies are still in their infancy stages, almost like the internet in 1990s. The ‘Facebooks’ and ‘Googles’ of crypto-era are yet to be founded and I believe that the only way to grow this industry is to stop checking coin prices every morning, and start building the technology, products and companies that will fuel the coin market growth.”
The author goes on to explain the differences between the terms blockchain, crypto currency, and crypto, so check that out if the distinctions are still murky to you. In terms of employment, “blockchain” positions can involve a broader range of applications, such as supply chains. Jobs in “crypto currency” tend to be at crypto currency-focused startups. If Shalupau is correct and the crypto field is still in its infancy, this site could lead to one’s chance to get in on the ground floor.
Cynthia Murrell, December 6, 2021
AI-Powered Alternative to Polygraph Emerging out of Israel
December 6, 2021
Will AI eventually replace the polygraph in discerning truth from falsehood? The Times of Israel suggests we may be heading in that direction in “Liar, Liar! ‘Reading’ Faces, Israeli Tech Spots Fibbers with 73% Accuracy.” The emerging technology is the project of a team at Tel Aviv University. Writer Nathan Jeffay reports:
“Israeli scientists say they have found a way to ‘read’ minuscule movements in the face in order to spot fibbers, and have done so with 73 percent accuracy. With highly sensitive electrodes placed to detect the smallest of movements by facial muscles, the researchers got their subjects to either speak truthfully or lie. They fed details on the patterns of those facial movements into an artificial intelligence tool, and taught it to determine whether other people are lying or telling the truth. Now, they are aiming to teach the AI tool to analyze face movements without electrodes. Instead, they want to develop the tech to follow faces in order to determine truthfulness via cameras — which could enable them to spot a liar from dozens of meters away.”
A 73% accuracy rate would leave a lot of room for false accusations. It is considerably lower than the estimated 87% accuracy rate of polygraph tests (a figure that is itself contested). Researchers promise, however, that accuracy will improve as development continues. The approach, we’re told, has a significant advantage over polygraphs, which some subjects can fool by regulating their heart rate, blood pressure, and breathing. Regarding the examination of facial muscles instead, researcher Dino Levy states:
“We knew before now that facial expressions that are manifested by contractions in face muscles represent various emotions. … But up until now when people tried to identify these small movements in face muscles, we can’t do—our brains and our perception aren’t fast or sophisticated enough to pick up these tiny movements in the face. Many studies have shown that it’s almost impossible for us to tell when someone is lying to us. Even experts, such as police interrogators, do only a little better than the rest of us.”
This specially tailored AI, however, can accurately interpret these movements; 73% of the time, anyway. Levy insists his team’s technology will be a game changer. Once they have been able to improve accuracy, of course.
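A quick back-of-the-envelope calculation shows why 73 percent leaves so much room for false accusations. Assume, purely for illustration, that only one statement in ten screened is actually a lie and that the 73 percent figure applies equally to catching lies and clearing truths; under those assumptions, most “lie” flags land on truthful people.

```python
# Back-of-the-envelope sketch: what fraction of "lie" flags are false accusations?
# The 10% base rate is an assumption for illustration; the 73% accuracy figure is from
# the article and is applied (simplistically) to both liars and truth-tellers.
base_rate = 0.10   # assumed share of screened statements that are actually lies
accuracy = 0.73    # reported accuracy, treated here as both sensitivity and specificity

true_flags = base_rate * accuracy                # liars correctly flagged
false_flags = (1 - base_rate) * (1 - accuracy)   # truthful people wrongly flagged

share_false = false_flags / (true_flags + false_flags)
print(f"Share of 'lie' flags that are false accusations: {share_false:.0%}")  # about 77%
```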
And here’s a question for Israeli companies with specialized software, “Are your systems used to hack American elected officials?”
Cynthia Murrell, December 6, 2021
A New Word Dorseying: Leaving Before the Fried Turkey Explodes
December 3, 2021
Full disclosure. We post Beyond Search tweets to Twitter. We use a script, and we use an account set up years ago. I don’t recall who on my team did this work, and I am not sure I know the password. We did this as a test for one of my lectures to a group of law enforcement and intelligence professionals to illustrate how a content stream could be implemented with zero fuss and muss. The mechanism is similar to the ones used by certain foreign entities to inject content into the Twitter users’ content pool.
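For anyone curious how little fuss is involved, here is a minimal sketch of such a scripted content stream. It assumes a Twitter developer account and the tweepy library; it is not the team’s actual script, and the credentials and URL are placeholders.

```python
# Illustrative sketch only -- not the Beyond Search team's actual script.
# Assumes Twitter API credentials and the tweepy library (pip install tweepy).
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

def post_headlines(headlines):
    """Push a list of (title, url) pairs to Twitter as individual tweets."""
    for title, url in headlines:
        client.create_tweet(text=f"{title} {url}")

post_headlines([("Content Control: More and More Popular", "https://example.com/post")])
```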
Why’s this important?
Twitter is a coterie service; that is, the principal users are concentrated on the left and right coasts of the US. The service meets the needs of this group because tips, facts, and observations about technology and its world are essential to the personas of the most enthusiastic tweet generators. There are secondary and tertiary uses as well. Spectrum pretends to care when its customers point out yet another service outage. Political big sparklers generate outputs for their constituents. Vendors of diet supplements find the service helpful as well.
But Twitter, like other social media services, is in the spotlight. The trucks carting these high-intensity beams are driven by wild-eyed and often overenthusiastic elected officials and laborers in the gray and beige government cubicles.
Write ups like “Twitter Has a New CEO; What About a New Business Model?” and “Twitter Bans Sharing Private Images and Videos without Consent” provide purported insight into the machinations of the new Twitter. But the main point is that Twitter allows humans and smart software to create personas and push content to others in the tweetiverse.
Dorseying means that one individual is getting out of Dodge before the law arrives. This exit is less elegant than the proactive departure of Messrs. Brin and Page from the Google. From my vantage point, the former big dog of the Tweeter wants to be undisturbed and work in less well illuminated locations. Is Dorseying an action similar to running away from trouble? Interesting question.
Can Twitter be enhanced, fixed, or remediated?
My view is that anonymous and easily created “accounts” require some thought. The magic of censorship is likely to be less impactful than the short-lived special effects in the early Disney films. (Does anyone remember the cinegraphic breakthrough of “sparkles”?) The amping up of advertising is likely to lead to a destination that many have previously visited; that is, one with carefully crafted paths, exhibits, attractions, and inducements to buy, buy, buy.
Net net: Twitter, like other social media, will be difficult to control. My hunch is that the service will continue to snip through social fabrics. Because Twitter is a publicly traded company, management has to respond to the financial context in which it operates. Fancy talk, recommendations, and half-hearted editorial measures may have unintended consequences. That’s what concerns me about the tweeter thing.
Dorseying was a good move.
Stephen E Arnold, December 3, 2021
Competition: Who Wants It? Not Monopolies or Legacy Software Outfits
December 3, 2021
Legacy software companies were once the toast of the tech industry, and now they set the standards and procedures followed by everyone. They have been in power so long that they do not like giving it up to newer competition. The Irish Times shares the opinions of that competition: “Tech Groups Say SAP, Oracle, And Microsoft Are Unfair Digital ‘Gatekeepers.’”
Tech groups in the European Union have dubbed Microsoft, Oracle, SAP, and other legacy software companies “gatekeepers” that prevent competition. The tech groups sent a letter to the EU about the Digital Markets Act, as reported by Bloomberg News. They argue that the act “falls short” of addressing software providers’ “unfair” licensing practices. The legacy companies, they say, lock customers in and prefer their own offerings when they sell cloud services.
The Digital Markets Act currently deals more with big US companies:
“The Digital Markets Act currently focuses on anti-competitive behaviors by social media and online marketplace companies like Meta Platforms Inc.’s Facebook; Alphabet’s Google and Amazon.com. While that has prompted criticism that European lawmakers are targeting US-based companies, some parliamentarians are trying to broaden the rules to include more companies like software providers.
The groups seeking the inclusion of software platforms represent more than 2,500 chief innovation officers and almost 700 businesses organizations in the four EU countries, including L’Oreal, Zalando and Volkswagen.”
One would think the new Digital Markets Act would apply to every tech company as a blanket law instead of focusing on a specific few. The EU should draft a law that blocks all gatekeeping practices and opens the market to competition. Legacy software companies do have a lot of money and clout, but that does not mean they get to write the law.
Whitney Grace, December 3, 2021
Not Search or AI But an Example of Trend Spin
December 3, 2021
Consumers are becoming more environmentally conscious, and that awareness is slowly working its way into the world of fast fashion. Companies are already starting to realize it: “Circular Business Models Offer A 700 Billion US Dollar Opportunity,” says Fashion United. The Ellen MacArthur Foundation reports that circular business models will grow from 3.5% to 23% by 2030, a $700 billion opportunity.
The rental, resale, repair, and remaking industries are already worth $73 billion and they are projected to grow more. Currently the fashion industry does not practice the circular models. Some brands offer “add-ons” such as discounts for recycled clothes or free items. This increases production rather than decreasing it. Rental companies do not always offer clothes designed to withstand multiple uses.
The fashion industry must adopt the circular model as part of its system rather than treating it as a limited extension:
“To rethink performance indicators, customer incentives, and customer experiences by shifting to a business model based on increasing the use of products, rather than producing and selling more products. This, it adds, requires businesses to rethink how it measures success, and to encourage its customers to opt for its circular offering through carefully designed incentives and enhanced customer experiences. The second action point is for all brands and retailers to design products that can be used more and for longer. As to maximize the economic and environmental potential of circular business models, products need to be designed and made to be physically durable, emotionally durable, and able to be remade and recycled at the end of their use.”
The current production models work one way: create new products with none of the resources returning for recycling.
Fashion is one of the world’s biggest polluters and its demands on resources such as water and fuel grow as fast fashion rules developed countries. If the fashion industry adopts more environmentally concerned practices, it could become as en vogue as past trends that should never have been popular: parachute pants, bicycle shorts, and powdered wigs.
Whitney Grace, December 3, 2021
What Could Possibly Go Wrong: Direct Connections to MSFT SQL Servers?
December 3, 2021
One can now connect Google Data Studio directly to MSSQL servers with a new beta version. Previously, this feat required the use of either Microsoft’s Power BI or Big Query. Reporter Christian Lauer over at CodeX frames this move as an incursion in, “Google Attacks Microsoft Power BI.” He writes:
“Maybe many of you have been waiting for this. Google Data Studio now also offers a connector to MSSQL servers — at least in the beta version. But you can already use it without any problems. For me this is a milestone and a direct attack on Microsoft Power BI. Because now Data Studio is again a bit closer to the top solutions like Power BI and Qlik or Tableau. In addition, you no longer have to use MS products or load the data into a data warehouse beforehand if they come from Microsoft servers. The advantage for Google’s solution is of course that Data Studio is free of charge. … Many companies have MSSQL databases, now the widely used and free Data Studio from Google also offers a built in connector for it. Often the right and better way would be to make the data available via a Data Warehouse or Data Lake. But especially for smaller companies with only a handful of MSSQL databases, this direct way via Data Studio is probably the most efficient.”
The write-up describes the straightforward process for connecting to an MSSQL database via Google Data Studio, complete with a screenshot. For more information, he sends us to Studio’s Help file, “How to Connect to Microsoft SQL Server.” We wonder, though, whether Microsoft would agree this development amounts to an “attack.” The company may barely notice the change. Cyber criminals? We will have to wait and see.
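The quote’s point about no longer staging data in a warehouse is easier to see in code. Below is a rough sketch of the two-step path a small shop previously needed: pull from MSSQL, load into BigQuery, then point Data Studio at BigQuery. Server names, credentials, and table names are placeholders, and the libraries shown (pyodbc, pandas, google-cloud-bigquery) are one common way to do it, not the only one.

```python
# Sketch of the "old" two-step path the new connector removes: MSSQL -> BigQuery -> Data Studio.
# Connection string, query, and table names are placeholders; requires pyodbc, pandas,
# and google-cloud-bigquery.
import pyodbc
import pandas as pd
from google.cloud import bigquery

# 1. Pull the data out of the Microsoft SQL Server instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=sales;UID=report_user;PWD=secret"
)
df = pd.read_sql("SELECT region, SUM(amount) AS total FROM orders GROUP BY region", conn)

# 2. Stage it in BigQuery so Data Studio can read it from there.
bq = bigquery.Client()
bq.load_table_from_dataframe(df, "my_project.reporting.orders_by_region").result()

# With the new beta connector, Data Studio can query the MSSQL server directly,
# skipping this staging step entirely.
```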
Cynthia Murrell, December 3, 2021