Instagram: What Does Suspicious Mean at This Facebook Outfit?

August 19, 2020

DarkCyber noted what could be construed as a baby step toward adulting or a much bigger step toward Facebook obtaining more fine-grained information. “Instagram Will Make Suspicious Accounts Verify Their Identity” states:

Instagram is taking new steps to root out bots and other accounts trying to manipulate its platform. The company says it will start asking some users to verify their identities if it suspects “potential inauthentic behavior.” Instagram stresses that the new policy won’t affect most users, but that it will target accounts that seem suspicious.

It seems that “inauthentic” means “suspicious.” Okay, what is that, exactly? The write up quotes an Instagram spokesperson as saying:

This includes accounts potentially engaged in coordinated inauthentic behavior, or when we see the majority of someone’s followers are in a different country to their location, or if we find signs of automation, such as bot accounts.

What addresses inauthenticity? How about this?

Under the new rules, these accounts will be asked to verify their identity by submitting a government ID. If they don’t, the company may down-rank their posts in Instagram’s feed or disable their account entirely.

Whether a moment of adulting or a data grab, the Facebook continues to be Facebook.

Stephen E Arnold, August 19, 2020

Data Federation: K2View Seizes Lance, Mounts Horse, and Sallies Forth

August 13, 2020

DarkCyber noted “K2View Raises $28 million to Automate Enterprise Data Unification.”

Here’s the write up’s explanation of the K2View:

K2View’s “micro-database” Fabric technology connects virtually to sources (e.g., internet of things devices, big data warehouses and data lakes, web services, and cloud apps) to organize data around segments like customers, stores, transactions, and products while storing it in secure servers and exposing it to devices, apps, and services. A graphical interface and auto-discovery feature facilitate the creation of two-way connections between app data sources and databases via microservices, or loosely coupled software systems. K2View says it leverages in-memory technology to perform transformations and continually keep target databases up to date.
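As a rough illustration of the entity-centric “micro-database” idea the quote describes, here is a minimal Python sketch. The source feeds, field names, and structure are invented for illustration; this is not K2View’s actual API.

```python
# Hypothetical sketch of a "micro-database" pattern: each business entity
# (here, a customer) gets its own small, consolidated record assembled
# from several disparate source systems. All names and fields are invented.
from collections import defaultdict

# Simulated source systems, each keyed differently
crm_rows = [{"cust_id": "C1", "name": "Acme"}, {"cust_id": "C2", "name": "Beta"}]
order_rows = [{"customer": "C1", "sku": "X-9", "qty": 3},
              {"customer": "C1", "sku": "Y-2", "qty": 1}]
ticket_rows = [{"cust": "C2", "issue": "late delivery"}]

def build_micro_databases():
    """Pivot source rows into one consolidated record per customer."""
    micro = defaultdict(lambda: {"profile": None, "orders": [], "tickets": []})
    for row in crm_rows:
        micro[row["cust_id"]]["profile"] = row
    for row in order_rows:
        micro[row["customer"]]["orders"].append(row)
    for row in ticket_rows:
        micro[row["cust"]]["tickets"].append(row)
    return dict(micro)

dbs = build_micro_databases()
print(len(dbs["C1"]["orders"]))  # 2
```

The hard part, of course, is not the pivot; it is recognizing which key in each feed identifies the same entity, which is where the observations below come in.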

The write up contains a block diagram.

Observations:

  1. It is difficult to determine how much manual (human) work will be required to deal with content objects not recognized by the K2View system
  2. What happens if the Internet connection to a data source goes down?
  3. What is the fall back when a microservice is not available or removed from service?

Many organizations offer solutions for disparate types of data scattered across many systems. Perhaps K2View will slay the digital windmills of silos, differing data types, and unstable connections? Silos have been part of the data landscape for as long as Don Quixote has been tilting at windmills.

Stephen E Arnold, August 13, 2020

After 20 Plus Years, Whoa! Surveillance by Big Tech

August 10, 2020

DarkCyber has noted a flurry of write ups expressing surprise, rage, indignation, and blusterification at the idea of a commercial company collecting data. Hello, services are free for a basic reason: making money. Part of making money is having something that other companies and organizations will purchase. A good example is personal information about users of free services. The way big companies work is that there is constant pressure to find new ways to generate money. Thus, there are data sucking apps; there are advertisements and more advertisements; there are subscriptions which lock in revenue while providing Amazon-style knowledge about those who shop; and there are many ornaments on these methods.

I got a kick out of “Silicon Valley’s Vast Data Collection Should Worry You More Than TikTok.” We know the story well. Commercial firms in the US gather data and license it, often to marketing firms and to other organizations. After two decades of blissful ignorance a devoted band of “real” journalists are now probing the core business model of many technology centric companies.

Give me a break. We are talking decades of business processes designed to generate useful reports from flows of actions by individuals. In some countries, the government performs this task. In others, commercial enterprises do the work and license the normalized data to governments.

This passage from the write up tickled my funny bone:

And none of this is unreasonable. We should be worried about private companies and governments potentially collecting data on millions of unsuspecting people and censoring content they don’t like. But those based in China represent just a sliver of that threat.

Yep, the old “woulda, coulda, shoulda” ploy. May I remind you, gentle reader, that we are decades into the automation of data about the actions of individuals. These are the happy and often ignorant humanoids who download apps, run queries, click on videos, and send personal messages while leaving a data trail a foot deep and a mile wide.

And now the need for something?

And data collection is not merely a technical and economic issue. Nope. Data collection is politics; for example:

TikTok’s critics might point to the increasingly scary behavior of China’s government as to why Chinese control of information is particularly alarming. They’re right about the behavior, but they curiously ignore the fact that the United States itself is currently governed by a far-right demagogue with his own concentration camps and authoritarian repression, and that the party behind him, which aligns entirely with his politics, reliably cycles into power at least once every eight years.

What’s the fix? Well, “oppose it all.”

Where were the regulators, the users, and the competitors 20 years ago? Probably in grade school, blissfully unaware that those handheld gadgets would become more important than other activities. Okay, adult thumbtypers, your outrage is interesting. Step back, and perhaps you can see why the howls of outrage, the references to evil forms of government, and the horror at toting around a device that provides real time documentation of one’s actions ring a bit hollow.

But after 20 years, is it surprising that personal data actions are captured, analyzed, and used to provide more data “stuff” to consume? As I said, it’s been 20 years with no lessening of the processes. Complain to your parents. Maybe they dropped the ball? Commercial enterprises and governments are like beavers. And beavers do what beavers do.

Stephen E Arnold, August 10, 2020

Quantexa: A Better Way to Nail a Money Launderer?

July 29, 2020

We noted the TechCrunch article “Quantexa Raises $64.7M to Bring Big Data Intelligence to Risk Analysis and Investigations.” There were a number of interesting statements or factoids in the write up; for example:

Altogether, Quantexa has “thousands of users” across 70+ countries, it said, with additional large enterprises, including Standard Chartered, OFX and Dunn & Bradstreet.

We also circled in true blue marker this passage:

As an example, typically, an investigation needs to do significantly more than just track the activity of one individual or one shell company, and you need to seek out the most unlikely connections between a number of actions in order to build up an accurate picture. When you think about it, trying to identify, track, shut down and catch a large money launderer (a typical use case for Quantexa’s software) is a classic big data problem.

And lastly:

Marria [the founder] says that it has a few key differentiators from these. First is how its software works at scale: “It comes back to entity resolution that [calculations] can be done in real time and at batch,” he said. “And this is a platform, software that is easily deployed and configured at a much lower total cost of ownership. It is tech and that’s quite important in the current climate.”

Some “real time” systems require time consuming and often elaborate configuration to produce useful outputs. The buzzwords take precedence over the nuts and bolts of installing, herding data, and tuning the outputs of this type of system.
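Entity resolution, the differentiator Marria cites, can be sketched in toy form. Real platforms use probabilistic, graph-based matching at scale; this example merely normalizes company names into a crude blocking key, and all records are invented:

```python
# Toy entity resolution: merge records whose normalized names collide.
# Production systems score many attributes probabilistically; this sketch
# shows only the simplest normalize-and-group step.
import re

records = [
    {"name": "J. Smith Holdings Ltd.", "country": "UK"},
    {"name": "j smith holdings limited", "country": "UK"},
    {"name": "Acme Trading",            "country": "US"},
]

def match_key(name: str) -> str:
    """Crude blocking key: lowercase, strip punctuation, expand 'limited'."""
    key = re.sub(r"[^a-z0-9 ]", "", name.lower())
    key = key.replace("limited", "ltd").strip()
    return key

def resolve(rows):
    entities = {}
    for row in rows:
        entities.setdefault(match_key(row["name"]), []).append(row)
    return entities

resolved = resolve(records)
print(len(resolved))  # 2: the two J. Smith variants collapse into one entity
```

Even this toy exposes the money laundering use case’s difficulty: shell companies are named precisely so that no cheap key will link them, which is why the “unlikely connections” matter.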

Worth monitoring how the company’s approach moves forward.

Stephen E Arnold, July 29, 2020

Oracle and Blockchain

July 28, 2020

Amidst the angst about US big technology companies, Rona, and Intel’s management floundering, Oracle blockchain is easy to overlook. We noted “Oracle Updates Blockchain Platform Cloud Service.” The title alone invokes the image of Amazon’s blockchain platform and its associated moving parts.

The write up focuses on Oracle as if the Amazon and other options do not exist, but to DarkCyber the parallels with Amazon’s blockchain services are clear. The article reports:

Blockchain Platform Cloud Service features stronger access controls for sharing confidential information, greater decentralization capabilities for blockchain consortiums, and stronger auditability when rich history database feature is used in conjunction with Oracle Database Blockchain Tables.

Even more Amazon envy seems to have influenced this “new” feature:

Oracle Cloud Infrastructure Availability Domains (and in the regions with a single Availability Domain, three Fault Domains) to provide stronger resilience and recoverability, with the SLA for the Enterprise SKUs of at least 99.95%.

The line up of services strikes me as having been developed after reading Amazon’s blockchain documentation; for example:

  • On demand storage
  • Spiffed up access controls
  • Workflow functions.

There is one difference, however. It appears that Oracle wants to tackle Amazon blockchain at a weak point: price. Oracle is not likely to be significantly cheaper than AWS blockchain, but it wants to make its pricing more or less understandable to a prospect.

Will clarity allow Oracle to compete with Amazon blockchain?

After losing Amazon as a customer and watching the online book store pump out blockchain inventions for several years, Oracle hopes its approach will prevail or at least catch up with the Bezos bulldozer.

Stephen E Arnold, July 28, 2020

TileDB Developing a Solution to Database Headaches

July 27, 2020

Developers at TileDB are working on a solution to the many problems traditional and NoSQL databases create, and now they have secured more funding to help them complete their platform. The company’s blog reports, “TileDB Closes $15M Series A for Industry’s First Universal Data Engine.” The funding round is led by Two Bear Capital, whose managing partner will be joining TileDB’s board of directors. The company’s CEO, Stavros Papadopoulos, writes:

“The Series A financing comes after TileDB was chosen by customers who experienced two key pains: scalability for complex data and deployment. Whole-genome population data, single-cell gene data, spatio-temporal satellite imagery, and asset-trading data all share multi-dimensional structures that are poorly handled by monolithic databases, tables, and legacy file formats. Newer computational frameworks evolved to offer ‘pluggable storage’ but that forces another part of the stack to deal with data management. As a result, organizations waste resources on managing a sea of files and optimizing storage performance, tasks traditionally done by the database. Moreover, developers and data scientists are spending excessive time in data engineering and deployment, instead of actual analysis and collaboration. …

“We invented a database that focuses on universal storage and data management rather than the compute layer, which we’ve instead made ‘pluggable.’ We cleared the path for analytics professionals and data scientists by taking over the messiest parts of data management, such as optimized storage for all data types on numerous backends, data versioning, metadata, access control within or outside organizational boundaries, and logging.”

So with this tool, developers will be freed from tedious manual steps, leaving more time to innovate and draw conclusions from their complex data. TileDB has also developed APIs to facilitate integration with tools like Spark, Dask, MariaDB and PrestoDB, while TileDB Cloud enables easy, secure sharing and scalability. See the write-up for praise from excited customers-to-be, or check out the company’s website. Readers can also access the open-source TileDB Embedded storage engine on GitHub. Founded in 2017, TileDB is based in Cambridge, Massachusetts.
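The multi-dimensional storage model Papadopoulos describes can be illustrated with a toy sparse array. This dictionary-based sketch is purely conceptual and is not TileDB’s API; the actual engine tiles, compresses, and versions arrays on disk or object storage:

```python
# A toy sparse 2-D array keyed by (sample, genome_position), illustrating
# why multi-dimensional structures suit data like whole-genome variants
# better than flat tables: slicing one dimension is a natural operation.
class SparseArray2D:
    def __init__(self):
        self._cells = {}  # (row, col) -> value

    def write(self, row: int, col: int, value):
        self._cells[(row, col)] = value

    def slice_row(self, row: int):
        """Return {col: value} for one row, e.g. one sample's variants."""
        return {c: v for (r, c), v in self._cells.items() if r == row}

arr = SparseArray2D()
arr.write(0, 1_000_003, "A>G")   # sample 0, variant at position 1,000,003
arr.write(0, 2_500_001, "C>T")
arr.write(7, 1_000_003, "A>G")   # sample 7 shares the first variant

print(sorted(arr.slice_row(0).keys()))
```

Only populated cells consume memory, which is the point: a genome has billions of positions, and any one sample touches a tiny fraction of them.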

Cynthia Murrell, July 27, 2020

IHS Markit Data Lake “Catalog”

July 14, 2020

One of the DarkCyber research team spotted this product announcement from IHS, a diversified information company: “IHS Markit’s New Data Lake Delivers Over 1,000 Datasets in an Integrated Catalogued Platform.” The article states:

The cloud-based platform stores, catalogues, and governs access to structured and unstructured data. Data Lake solutions include access to over 1,000 proprietary data assets, which will be expanded over time, as well as a technology platform allowing clients to manage their own data. The IHS Markit Data Lake Catalogue offers robust search and exploration capabilities, accessed via a standardized taxonomy, across datasets from the financial services, transportation and energy sectors.

The idea is consistently organized information. Queries can run across the content to which the customer has access.

Similar services are available from other companies; for example, Oracle BlueKai.

One question which comes up is, “What exactly are the data on offer?” Another is, “How much does it cost to use the service?”

Let’s tackle the first question: Scope.

None of the aggregators make it easy to scan a list of datasets, click on an item, and get a useful synopsis of the content, content elements, number of items in the dataset, update frequency (annual, monthly, weekly, near real time), and the cost method applicable to a particular “standard” query.
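For the record, the kind of synopsis DarkCyber would like to see is easy to specify. Here is a hypothetical catalog record; the fields and numbers are invented for illustration, and no aggregator publishes records in exactly this form, which is the point of the complaint:

```python
# A hypothetical dataset synopsis of the kind described above.
# Every value here is invented for illustration.
from dataclasses import dataclass

@dataclass
class DatasetSynopsis:
    name: str
    description: str
    fields: list           # content elements, e.g. ["vin", "odometer", ...]
    item_count: int        # number of items in the dataset
    update_frequency: str  # "annual" | "monthly" | "weekly" | "near real time"
    cost_per_query: float  # cost method for a "standard" query, in USD

vehicle_history = DatasetSynopsis(
    name="Vehicle History (illustrative)",
    description="Title, odometer, and accident records by VIN",
    fields=["vin", "title_state", "odometer", "accident_count"],
    item_count=28_000_000,
    update_frequency="weekly",
    cost_per_query=0.45,
)
print(vehicle_history.update_frequency)
```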

A search of Bing and Google reveals the name of particular sets of data; for example, Carfax. However, getting answers to the scope question can require direct interaction with the company. Some aggregators operate in a similar manner.

The second question: Cost?

The answer to the cost question is a tricky one. The data aggregators have adopted a set or a cluster of pricing scenarios. It is up to the customer to look at the disclosed data and do some figuring. In DarkCyber’s experience, the data aggregators know much more about which content, processes, functions, or operations generate the maximum profit for the vendor. The customer does not have this insight. Only through using the system, analyzing the invoices, and paying them is it possible to get a grip on costs.

DarkCyber’s view is that data marketplaces are vulnerable to disruption. With a growing demand for a wide range of information, some potential customers want answers before signing a contract and paying big bucks.

Aggregators are a participant in what DarkCyber calls “professional publishing.” The key to this sector is mystery and a reluctance to spell out exact answers to important questions.

What company is poised to disrupt the data aggregation business? Is it a small scale specialist like the firms pursued relentlessly by “real” journalists seeking a story about violations of privacy? Is it a giant company casting about for a new source of revenue and, therefore, easily overlooked? Aggregation is not exactly exciting for many people.

DarkCyber does not know. One thing seems highly likely: the professional publishing data aggregation sector will face competitive pressure in the months ahead.

Some customers may be fed up with the secrecy and lack of clarity, and entrepreneurs will spot the opportunity and move forward. Rich innovators will just buy the vendors and move in new directions.

Stephen E Arnold, July 14, 2020

The Myth of Data Federation: Not a New Problem, Not One Easily Solved

July 8, 2020

I read “A Plan to Make Police Data Open Source Started on Reddit.” The main point of this particular article is:

The Police Data Accessibility Project aims to request, download, clean, and standardize public records that right now are overly difficult to find.

Interesting, but I interpreted the Silicon Valley centric write up differently. If you are a marketer of systems which purport to normalize disparate types of data, aggregate them, federate indexes, and make the data accessible, analyzable, retrievable, and bang on dead simple — stop reading now. I don’t want to deal with squeals from vendors about their superior systems.

For the individual reading this sentence, a word of advice. Fasten your seat belt.

Some points to consider when reading the article cited above, listening to a Vimeo “insider” sales pitch, or just doing techno babble with your Spin class pals:

  1. Dealing with disparate data requires time and money as well as NOT ONE but multiple software tools.
  2. Even with a well resourced and technologically adept staff, exceptions require attention. A failure to deal with the stuff in the Exceptions folder can skew the outputs of some Fancy Dan analytic systems. Example: How about that Detroit facial recognition system? Nifty, eh?
  3. The flows of real time data are a big problem — are you ready for this — a challenge even to the Facebooks, Googles, and Microsofts of the world. The reason is that the volume of data and CHANGES TO THOSE ALREADY PROCESSED ITEMS OF INFORMATION is a very, very tough problem. No, faster processors, bigger pipes, and zippy SSDs won’t do the job. The trouble lies within: the intra-device and intra-software-module flows. The fix is to sample, and sampling increases the risk of inaccuracies. Example: Remember Detroit’s facial recognition accuracy? The arrested individual may share some impressions with you.
  4. The baloney about “all” data or “any” type is crazy talk. When one deals with more than 18,000 police forces in the US, outputs from surveillance devices from different vendors, and the geodumps of individuals and their ad tracking beacons, the claim is that all of this is going to be mashed up and made usable. Noble idea. There are many noble ideas.
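The sampling trade-off in point three has a classic concrete form: reservoir sampling keeps a fixed-size uniform sample of an unbounded stream. A minimal sketch of the standard Algorithm R, not tied to any vendor or project mentioned above:

```python
# Reservoir sampling (Algorithm R): a uniform random sample of k items
# from a stream of unknown length, using O(k) memory. This is the kind
# of compromise real-time pipelines make when full processing is too slow,
# at the cost of working from a sample rather than the whole flow.
import random

def reservoir_sample(stream, k: int, seed: int = 42):
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)  # inclusive upper bound
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1_000_000), k=100)
print(len(sample))  # 100
```

Every item has an equal chance of surviving, but anything not in the reservoir is gone, which is exactly the inaccuracy risk noted above.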

Why am I taking the time to repeat what anyone with experience in large scale data normalization and analysis knows?

Baloney can be thinly sliced, smeared with gochujang, and served on Delft plates. Know what? Still baloney.

Gobble this:

Still, data is an important piece of understanding what law enforcement looks like in the US now, and what it could look like in the future. And making that information more accessible, and the stories people tell about policing more transparent, is a first step.

But the killer assumption is that the humans involved don’t make errors, systems remain online, and file formats are forever.

That baloney. It really is incredible, just not in the way you think.

Stephen E Arnold, July 8, 2020

Content for Deep Learning: The Lionbridge View

March 17, 2020

Here is a handy resource. Lionbridge AI shares “The Best 25 Datasets for Natural Language Processing.” The list is designed as a starting point for those just delving into NLP. Writer Meiryum Ali begins:

“Natural language processing is a massive field of research. With so many areas to explore, it can sometimes be difficult to know where to begin – let alone start searching for data. With this in mind, we’ve combed the web to create the ultimate collection of free online datasets for NLP. Although it’s impossible to cover every field of interest, we’ve done our best to compile datasets for a broad range of NLP research areas, from sentiment analysis to audio and voice recognition projects. Use it as a starting point for your experiments, or check out our specialized collections of datasets if you already have a project in mind.”

The suggestions are divided by purpose. For use in sentiment analysis, Ali notes one needs to train machine learning models on large, specialized datasets like the Multidomain Sentiment Analysis Dataset or the Stanford Sentiment Treebank. Some text datasets she suggests for natural language processing tasks like voice recognition or chatbots include 20 Newsgroups, the Reuters News Dataset, and Princeton University’s WordNet. Audio speech datasets that made the list include the audiobooks of LibriSpeech, the Spoken Wikipedia Corpora, and the Free Spoken Digit Dataset. The collection concludes with some more general-purpose datasets, like Amazon Reviews, the Blogger Corpus, the Gutenberg eBooks List, and a set of questions and answers from Jeopardy. See the write-up for more on each of these entries as well as the rest of Ali’s suggestions in each category.

This being a post from Lionbridge, an AI training data firm, it naturally concludes with an invitation to contact them when ready to move beyond these pre-made datasets to one customized for you. Based in Waltham, Massachusetts, the company was founded in 1996 and acquired by H.I.G. Capital in 2017.

Cynthia Murrell, March 17, 2020

LiveRamp: Data Aggregation Under the Marketing Umbrella

March 10, 2020

Editor’s Note: We posted a short item about Venntel. This sparked some email and phone calls from journalists wanting to know more about data aggregation. There are a number of large data aggregation companies. Many of these work with diverse partners. If the data aggregation companies do not sell directly to the US government, some of the partners of these firms might. One of the larger data aggregation companies positions itself as a specialist, a niche player. We have pulled some information from our files to illustrate what data aggregation, cross correlation, and identity resolution contributes to advertisers, political candidates, and other entities.

Introduction

LiveRamp is Acxiom, and it occupies a leadership position in resolving identity across data sets.  The system can be used by a company to generate revenue from its information. The company says:

We’re innovators, engineers, marketers, and data ethics experts on a mission to make data safe and easy to use.

LiveRamp also makes it easy for a company to obtain certain types of data and services which can be made more accurate via LiveRamp methods. The information is first, second, and third party data. First means the company captures the data directly. Second means the data come from a partner. Third means that, like distant cousins, there’s mostly a tenuous relationship among the source of the data, the creator of the data, the collector of the data, and the intermediary who provides the data to LiveRamp. There’s a 2016 how-to at this link.


According to a former LiveRamp employee:

LiveRamp doesn’t actually provide intelligence on the data, it just moves the data around effectively, quickly, seamlessly, and accurately.

The basic mechanism was explained in “The Hidden Value of Acxiom’s LiveRamp”:

An alternative approach is to designate a single company to be the hub of all ID syncs. The hub can collect IDs from each participating ad tech partner and then form mutual ID syncs as needed. Think of this as a match maker who knows the full universe of eligible singles and can then introduce couples. LiveRamp has established itself as this match maker…

This is ID syncing; that is, figuring out who is who or what is what via anonymized or incomplete data sets.
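The “match maker” hub can be sketched in a few lines. This toy model, with invented partner names and cookie IDs, only shows the bookkeeping; production ID syncing happens across billions of pixel fires and uses far richer matching:

```python
# Hub-style ID syncing: each ad tech partner knows a user by its own
# cookie ID; the hub keeps a match table linking partner IDs to one
# persistent identifier. All identifiers here are invented.
class IdSyncHub:
    def __init__(self):
        self._next = 0
        self._links = {}  # (partner, partner_id) -> hub_id

    def sync(self, partner: str, partner_id: str, hub_id=None) -> str:
        """Record a partner's ID; mint a new hub ID unless one is supplied."""
        key = (partner, partner_id)
        if key not in self._links:
            if hub_id is None:
                hub_id = f"HUB-{self._next}"
                self._next += 1
            self._links[key] = hub_id
        return self._links[key]

hub = IdSyncHub()
a = hub.sync("dsp_a", "cookie-123")            # first sighting: new hub ID
b = hub.sync("ssp_b", "cookie-999", hub_id=a)  # partner B matched to same user
print(a == b)  # True: two partner cookies now resolve to one identifier
```

Once two partner IDs point at the same hub identifier, everything either partner knows about that cookie can be joined, which is the whole commercial proposition.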

There’s nothing unusual in what LiveRamp does. Oracle and other firms perform onboarding too. Why? Data are a hot mess. Hot means that government agencies, companies, digital currency providers, and non governmental organizations will license access to these data. Mess means that the information is messy, incomplete, and inaccurate. Cross correlation can address some, but not all, of these characteristics.

The Business: License Access to Data

Think of LiveRamp as an old-school mailing list company. There’s a difference. LiveRamp drinks protein shakes, follows a keto diet, and makes full use of digital technology.

According to the company:

We have a unique philosophy and approach to onboarding [that’s the LiveRamp lingo for importing data]. It’s not just about bringing offline data online. It’s about bringing siloed first-, second-, and third-party data together in a privacy-conscious manner and then resolving it to a single persistent identifier called an IdentityLink.

DarkCyber is no expert in the business processes of LiveRamp. We can express some of these ideas in our own words.

Onboarding means importing. In order to import data, LiveRamp, a Fiverr worker, or smart software has to convert the source data to a format LiveRamp can import. There are other steps to make sure the data is consistent, fields exist, and are what the bringer of the data says they are; for example, the number of records matches what the data provider asserts.
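A toy version of the record-count check described above, with invented field names; real onboarding pipelines validate far more than this:

```python
# Hypothetical onboarding validation: before importing a feed, confirm
# required fields exist and the record count matches what the data
# provider asserts. Field names and the sample feed are invented.
import csv
import io

REQUIRED_FIELDS = {"email_hash", "postal_code"}

def validate_feed(raw: str, asserted_count: int):
    rows = list(csv.DictReader(io.StringIO(raw)))
    present = set(rows[0].keys()) if rows else set()
    missing = REQUIRED_FIELDS - present
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if len(rows) != asserted_count:
        raise ValueError(
            f"provider asserted {asserted_count} records, got {len(rows)}"
        )
    return rows

feed = "email_hash,postal_code\nab12,40202\ncd34,40515\n"
rows = validate_feed(feed, asserted_count=2)
print(len(rows))  # 2
```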

Siloed data are data kept apart from other data. The reason for creating separate, often locked down sets of data separate from other data is for secrecy, licensing compliance, or business policies; for example, a pharma outfit developing a Covid 19 treatment does not want those data floating around anywhere except in a very narrow slice of the research facility. Once siloed data appear anywhere, DarkCyber becomes quite curious about the who, what, when, where, why, and the all important how. How answers the question, “How did the data escape the silo?”

Privacy conscious is a phrase that seems a bit like Facebook lingo. No comment or further explanation is needed from DarkCyber’s point of view.

IdentityLink is essentially an accession number to a profile. Law enforcement gives prisoners numbers and gathers data in a profile. LiveRamp does the same for the entities its cross correlative methods identify. Once an individual profile exists, other numerical procedures can be applied to assign “values” or “classifications” to the entities; for example, sports fan or millennial big spender. One may be able to “resolve identity” if a customer does not know “who” an entity is.


Cookie data are available. These are useful for a range of specialized functions; for example, trying to determine where an individual has “gone” on the Internet and related operations.

In a nutshell, this is the business of LiveRamp.

Open Source Contributions

LiveRamp has more than three dozen repositories in GitHub. Examples include:

  • Cascading_ext, which allows LiveRamp customers to build, debug, and run simple data workflows.
  • HyperMinHash-java. Cross correlation by any other name still generates useful outputs.
  • Munkres. Optimization made semi-easy.
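HyperMinHash estimates set similarity from small sketches. A plain MinHash example (the simpler ancestor of HyperMinHash, and not LiveRamp’s code) shows the idea: the fraction of matching minimum hash values approximates the Jaccard overlap of two sets:

```python
# MinHash sketching: hash every set member under num_hashes seeded hash
# functions and keep only the minimum per function. The fraction of
# positions where two sketches agree estimates the sets' Jaccard
# similarity without comparing the sets directly.
import hashlib

def minhash(items, num_hashes: int = 64):
    sketch = []
    for i in range(num_hashes):
        best = min(
            int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16)
            for x in items
        )
        sketch.append(best)
    return sketch

def estimate_jaccard(s1, s2):
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

set_a = {f"user{i}" for i in range(100)}
set_b = {f"user{i}" for i in range(50, 150)}  # true Jaccard = 50/150

est = estimate_jaccard(minhash(set_a), minhash(set_b))
print(round(est, 2))  # estimated Jaccard similarity, noisy at 64 hashes
```

For an aggregator, this means two partners can compare audience overlap by exchanging kilobyte-sized sketches instead of raw user lists.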
People

The LiveRamp CEO is Scott Howe, who used to work at Microsoft. LiveRamp purchased Data Plus Math, a firm specializing in analyzing targeted ads on traditional and streaming TV. Data Plus Math co-founders, CEO John Hoctor and Chief Technology Officer Matthew Emans, allegedly have work experience with Mr. Howe in Microsoft’s advertising unit.

Interesting Customers
  • Advertising agencies
  • Political campaigns
  • Ad inventory brokers.

Stephen E Arnold, March 10, 2020
