Verisign Outreach Program Remediates Billions of Name Collision Queries

A name collision occurs when a user attempts to resolve a domain in one namespace, but it unexpectedly resolves in a different namespace. Name collision issues in the public global Domain Name System (DNS) cause billions of unnecessary and potentially unsafe DNS queries every day. A targeted outreach program that Verisign started in March 2020 has remediated one billion queries per day to the A and J root name servers, via 46 collision strings. After contacting several national internet service providers (ISPs), the outreach effort grew to include large search engines, social media companies, networking equipment manufacturers, national CERTs, security trust groups, commercial DNS providers, and financial institutions.

While this unilateral outreach effort resulted in significant and successful name collision remediation, it is broader DNS community engagement, education, and participation that offers the potential to address many of the remaining name collision problems. Verisign hopes its successes will encourage participation by other organizations in similar positions in the DNS community.

Verisign is proud to be the operator of two of the world's 13 authoritative root servers. Being a root server operator carries with it many operational responsibilities. Ensuring the security, stability, and resiliency of the DNS requires proactive efforts to prevent attacks against the root name servers from disrupting DNS resolution, as well as monitoring of DNS resolution patterns for misconfigurations, signaling telemetry, and unexpected or unintended uses that, without closer collaboration, could have unforeseen consequences (e.g., Chromium's impact on root DNS traffic).

Monitoring may require various forms of responsible disclosure or notification to the underlying parties. Further, monitoring the root server system poses logistical challenges because any outreach and remediation programs must work at internet scale, and because root operators have no direct relationship with many of the involved entities.

Despite these challenges, Verisign has conducted several successful internet-scale outreach efforts to address various issues we have observed in the DNS.

In response to the Internet Corporation for Assigned Names and Numbers (ICANN) proposal to mitigate name collision risks in 2013, Verisign conducted a focused study on the collision string .CBA. Our measurement study revealed evidence of a substantial internet-connected infrastructure in Japan that relied on the non-resolution of names ending in .CBA. Verisign informed the network operator, who subsequently reconfigured some of its internal systems, resulting in an immediate decline in queries for .CBA observed at the A and J root servers.

Prior to the 2018 KSK rollover, several operators of DNSSEC-validating name servers appeared to be sending out-of-date RFC 8145 signals to root name servers. To ensure the KSK rollover did not disrupt internet name resolution for billions of end users, Verisign augmented ICANN's outreach effort and conducted a multi-faceted technical outreach program, contacting and working with the United States Computer Emergency Readiness Team (US-CERT) and other national CERTs, industry partners, and various DNS operator groups, and performing direct outreach to out-of-date signalers. The ultimate success of the KSK rollover was due in large part to the outreach efforts of ICANN and Verisign.

In response to the ICANN Board's request in resolutions 2017.11.02.29 through 2017.11.02.31, the ICANN Security and Stability Advisory Committee (SSAC) was asked to conduct studies and to present data and points of view on collision strings, including specific advice on three higher-risk strings: .CORP, .HOME, and .MAIL. While Verisign is actively engaged in this Name Collision Analysis Project (NCAP) developed by SSAC, we are also reviving and expanding our 2012 name collision outreach efforts.

Verisign's name collision outreach program is based on the guidance we provided in several recent peer-reviewed name collision publications, which highlighted various name collision vulnerabilities, examined the root causes of leaked queries, and made remediation recommendations. Verisign's program uses A and J root name server traffic data to identify high-affinity strings related to particular networks, as well as high query volume strings that are contextually associated with device manufacturers, software, or platforms. We then attempt to contact the underlying parties and assist with remediation as appropriate.
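As a rough illustration of this kind of traffic analysis, the sketch below counts root queries per non-delegated TLD from a simplified query log. The log format, delegated-TLD set, and thresholds are illustrative assumptions, not Verisign's actual pipeline.

```python
from collections import Counter, defaultdict

# Illustrative sketch: flag non-delegated TLDs that receive heavy root traffic.
# In reality the delegated set is the root zone; the thresholds are made up.
DELEGATED = {"com", "net", "org", "jp"}
HIGH_VOLUME = 10_000  # queries/day threshold (assumption)

def collision_candidates(log_lines):
    """Return non-delegated TLDs with high volume or high source affinity."""
    per_tld = Counter()
    sources = defaultdict(set)
    for line in log_lines:
        src, qname = line.split()
        tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
        if tld not in DELEGATED:
            per_tld[tld] += 1
            sources[tld].add(src)
    # High query volume, or "high affinity": many queries from few sources.
    return {t: n for t, n in per_tld.items()
            if n >= HIGH_VOLUME or n / len(sources[t]) > 100}
```

A string queried thousands of times by a handful of networks points at an internal namespace leaking to the root, which is exactly the kind of signal that makes targeted outreach possible.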

While we partially rely on direct communication channel contact information, the key enabler of our outreach efforts has been Verisign's relationships with the broader collective DNS community. Verisign's active participation in various industry organizations within the ICANN and DNS communities, such as M3AAWG, FIRST, DNS-OARC, APWG, NANOG, RIPE NCC, APNIC, and IETF1, enables us to identify and communicate with a broad and diverse set of constituents. In many cases, participants operate infrastructure involved in name collisions. In others, they are able to put us in direct contact with the appropriate parties.

Through a combination of DNS traffic analysis, publicly accessible data, and the rolodexes of various industry partnerships, over the course of 2020 we were able to conduct effective outreach to the anonymized entities listed in Table 1.

Table 1. Sample of outreach efforts performed by Verisign.

| Organization | Queries per Day to A & J | Status | Number of Collision Strings (TLDs) | Notes / Root Cause Analysis |
| --- | --- | --- | --- | --- |
| Search Engine | 650M | Fixed | 1 string | Application not using FQDNs |
| Telecommunications Provider | 250M | Fixed | N/A | Prefetching bug |
| eCommerce Provider | 150M | Fixed | 25 strings | Application not using FQDNs |
| Networking Manufacturer | 70M | Pending | 3 strings | Suffix search list |
| Cloud Provider | 64M | Fixed | 15 strings | Suffix search list |
| Telecommunications Provider | 60M | Fixed | 2 strings | Remediated through device vendor |
| Networking Manufacturer | 45M | Pending | 2 strings | Suffix search list problem in router/modem device |
| Financial Corporation | 35M | Fixed | 2 strings | Typo / misconfiguration |
| Social Media Company | 30M | Pending | 9 strings | Application not using FQDNs |
| ISP | 20M | Fixed | 1 string | Suffix search list problem in router/modem device |
| Software Provider | 20M | Pending | 50+ strings | Acknowledged but still investigating |
| ISP | 5M | Pending | 1 string | At time of writing, still investigating but confirmed it is a router/modem device |

Many of the name collision problems we encounter are the result of misconfigurations and of applications not using fully qualified domain names. After operators deploy patches to their environments, Verisign often observes an immediate and dramatic traffic decrease at the A and J root name servers, as shown in Figure 1 below. Although several networking equipment vendors and ISPs have acknowledged their name collision problems, developing and deploying firmware to a large userbase will take time.
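The suffix search list behavior behind several of the root causes in Table 1 can be sketched as follows; the search list and host names are hypothetical.

```python
# Sketch of how an unqualified name plus a DNS suffix search list leaks
# queries: each candidate FQDN is tried in turn, and any candidate ending
# in a non-delegated suffix (e.g., an internal-only TLD) becomes root traffic.
def candidate_queries(name: str, search_list: list[str]) -> list[str]:
    if name.endswith("."):  # already fully qualified: one query, no leak
        return [name]
    return [f"{name}.{suffix}." for suffix in search_list] + [f"{name}."]
```

A laptop configured with an internal suffix such as "corp" and taken outside the intranet would try "wiki.corp." first, and every such failed lookup becomes a name collision query at the root; using fully qualified names avoids the expansion entirely.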

Figure 1. Daily queries for two collision strings to A and J root servers during a nine-month period.

Cumulatively, the operators who have deployed patches account for a reduction of one billion queries per day to the A and J root servers (roughly 3% of total traffic). Although root traffic is not evenly distributed among the 13 authoritative servers, we expect a similar impact at the other 11, resulting in a system-wide reduction of approximately 6.5 billion queries per day.

As the ICANN community prepares for Subsequent Procedures (the introduction of additional new TLDs) and the SSAC NCAP continues its work to answer the ICANN Board's questions, we encourage the community to join our active outreach efforts to address name collisions. We believe our work shows how outreach can have a significant impact on both the parties involved and the broader community. Verisign is committed to addressing name collision problems and will continue executing this outreach program to help minimize the attack surface exposed by name collisions and to be a responsible and hygienic root operator.

For additional information about name collisions and how to properly manage private-use TLDs, please visit ICANN's Name Collision Resource & Information website.

  1. The Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), Forum of Incident Response and Security Teams (FIRST), DNS Operations, Analysis, and Research Center (DNS-OARC), Anti-Phishing Working Group (APWG), North American Network Operators' Group (NANOG), Réseaux IP Européens Network Coordination Centre (RIPE NCC), Asia Pacific Network Information Centre (APNIC), Internet Engineering Task Force (IETF) 

Written by Matt Thomas, Distinguished Engineer at Verisign | 15-Jan-2021 23:29

ICANN 2021 NomCom Will Fill 9 Positions

As in every year, at the end of ICANN's Annual General Meeting (AGM), the new Nominating Committee (NomCom) comes together to start its work. Due to the COVID-19 pandemic, the circumstances were slightly different; nevertheless, the 2021 NomCom kicked off at the end of 2020.

ICANN's Nominating Committee is charged with identifying, recruiting, and selecting nominees of the highest possible quality for key leadership positions at ICANN. The 2021 NomCom is seeking candidates for the following positions:

  • Three members of the ICANN Board of Directors
  • Three regional representatives to the At-Large Advisory Committee (ALAC) — (one each from Africa, Asia/Australia/Pacific Islands, and Latin America/Caribbean regions)
  • Two members of the Generic Names Supporting Organization (GNSO) Council
  • One member of the Country Code Names Supporting Organization (ccNSO) Council

The NomCom is an independent committee of 21 delegates, 15 of whom have voting privileges. The NomCom is designed to function independently from the Board, the Supporting Organizations, and Advisory Committees. The full cycle lasts one year and includes five main phases: preparation, recruitment of candidates, assessments of candidates, candidate selection, and NomCom reporting to the community.

The COVID-19 pandemic will not make the job any easier, as it is inevitable that some, if not all, meetings will be held only virtually. That will make it challenging to raise awareness, attract applications, and reach out to possible candidates.

The application phase will start by the end of January, and the call will be published on NomCom's website. From then on, applications can be submitted for an expected two months.

Written by Tobias Sattler, CTO / Board Member at united-domains | 15-Jan-2021 18:29

New Study by eco Alliance: Best Practices Show Future Potential for Green IT 2030

Are European data centres ready for the climate targets of the EU Green Deal and for strengthening climate and environmental protection through digitalisation? A new study presenting best practices in the field of energy-efficient data centres formulates technological development potential as well as policy recommendations.

Digitalisation needs powerful digital infrastructures in the form of data centres, edge computing, and cloud services. This ecosystem of digital infrastructures requires energy for the transmission, storage, and processing of data. European data centres form the backbone of digitalisation and are already among the most energy-efficient in the world. Nevertheless, further efficiency potential can be exploited in the future, making even greater energy savings possible. Data centres do not produce CO2 themselves; their emissions depend on the energy mix available in the respective countries. In Germany in particular, an accelerated energy transition that further optimises the available energy mix can help to reduce CO2 emissions even more rapidly.

These are the central findings of the new study "Data Centres in Europe — Opportunities for Sustainable Digitalisation — Part II" published by the Alliance for the Strengthening of Digital Infrastructures in Germany, founded under the umbrella of eco — Association of the Internet Industry, and jointly developed with the Borderstep Institute, with the support of the Vodafone Institute.

The study also shows that a move away from autonomous, locally-operated IT infrastructure to efficient cloud use can save up to 20 percent of the energy previously required and significantly reduce a company's CO2 emissions. This is possible because of the optimised server utilisation of cloud providers and the resultant higher energy efficiency. The United States Data Center Energy Usage Report estimates that hyperscale cloud data centres require up to 80% less energy for infrastructure such as cooling and power supply than traditional data centres.

"We now have it in our hands to contribute to a climate-neutral Europe as set out in the EU Green Deal, with the help of powerful and efficient digital infrastructures," says Dr. Béla Waldhauser, spokesperson of the Alliance for the Strengthening of Digital Infrastructures in Germany.

Figure 1: Fields of Technology for Which Government Action is Considered Necessary

If policymakers now invest in research and funding for energy-efficient digital infrastructures, this will in turn have a positive effect on many other areas, such as resource-saving industrial and work processes or emission-reducing urban and transport planning. A prerequisite is a functioning digital ecosystem consisting of efficient data centres, broadband networks rolled out to provide coverage across the board, a rapidly increasing number of 5G networks, and all of this running software programmed to be highly energy-efficient.

Best practices: Future potential demonstrated in Germany, Portugal, Spain and Sweden

The study uses various best practice examples to show which locations are already saving large amounts of energy today using the innovative technologies and applications available on the market. These include data centres in Portugal, Spain, Sweden and Germany.

The greatest potential for increased energy efficiency in data centres is offered above all by technologies in the field of cooling and ventilation, especially with regard to waste heat recovery. The German data centre Eurotheum, for example, employs a water-based direct cooling system that allows around 70 percent of its own waste heat to be reused to heat local offices and conference rooms as well as hotels and restaurants.

Figure 2: Delphi Survey – Assessment of the Potential for Reducing Greenhouse Gases in the Technology Fields

The Amazon Web Services (AWS) data centre in Seville, Spain, is considered a best practice example in the promotion and usage of renewable energy. Due to its high levels of solar irradiation, Spain is ideally suited for Photovoltaics (PV). AWS concluded a long-term electricity supply contract in Seville for a photovoltaic (PV) system with a capacity of 149 megawatts. AWS has set itself the goal of becoming climate-neutral by 2040 and of transitioning its energy supply to 80% renewable energy by 2024.

In addition, in 2015, Vodafone launched a multi-year program to optimise the energy efficiency of existing data centres and telecommunication sites in several EU Countries. In the best practice example from Portugal presented in the study, the power usage effectiveness (PUE) was reduced from 1.57 to the current 1.36, an excellent result given the relatively high outside air temperatures in Portugal and the structural condition of these buildings compared to modern, newly-built data centres. At the same time, the data traffic has significantly increased — by around 40% per year. From July 2021, electricity for all European sites will be supplied using renewable energy.
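For a fixed IT load, total facility energy scales linearly with PUE, so the improvement reported above from 1.57 to 1.36 corresponds to roughly 13 percent less total facility energy, as a quick calculation shows:

```python
# PUE = total facility energy / IT equipment energy.
# For the same IT load, total facility energy scales linearly with PUE.
pue_before, pue_after = 1.57, 1.36
savings = (pue_before - pue_after) / pue_before
print(f"{savings:.1%}")  # prints 13.4%
```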

Inger Paus, Managing Director of the Vodafone Institute: "The study clearly shows that digital infrastructures such as energy-efficient data centres are a key element in the green transformation of our economy. If we want to achieve sustainable digitalisation in Europe, we must now invest sufficiently both in research into innovative technological approaches and in the promotion and development of energy-efficient digital infrastructures. Only in this way can we achieve the EU goal of reducing CO2 emissions from data centres by 100 percent by 2030."

CO2 emissions of European data centres have been declining for 5 years despite increasing processing power

The use cases also illustrate that large processing capacity and high CO2 savings potential are not mutually exclusive, but are two sides of the same coin. Dr. Béla Waldhauser: "Politicians are currently often critical of digital technologies and services in terms of their energy balance and climate impact. But this is a fallacy: Digitalisation not only keeps our economic and social life going but is also part of the solution to the climate crisis." This is why policymakers, business, and science should now work closely together to transfer these findings even more strongly to the European data centre industry.

The industry is already on the right track: While the demand for computing power has increased tenfold over the past 10 years as a result of the ongoing digitalisation of the economy and society, the energy consumption per gigabit of data centres is now 12 times lower than in 2010. Since 2015, CO2 emissions from European data centres throughout Europe have been falling, despite a massive increase in processing power. This trend is set to continue in the future.*

Accelerated energy transition can also massively reduce CO2 emissions in Germany

At the same time, the alliance of important representatives of the data centre industry sees a significant need for improvement, especially for Germany, in order to implement the goals of the EU Green Deal.

"Of course, the best and most ambitious climate targets are useless unless they are realistic," Waldhauser continues. "Our industry fully supports the EU climate goals, but in order to fully implement the climate-neutral operation, the necessary basic conditions must be created as a first step. In addition to a sustainable ecosystem of digital infrastructures, we need a Digital Single Market that enables locations in Europe to meet the respective requirements on an equal footing and also under comparable conditions and needs. In concrete terms, an industrial electricity price for digital infrastructure providers is certainly the right goal for such a level playing field, and it will allow Europe to remain competitive in the face of international competition."

Download Part I of the study:
Data Centres in Europe — Opportunities for Sustainable Digitalisation

Download Part II of the study:
Data Centres in Europe — Opportunities for Sustainable Digitalisation

  1. * Despite a 24% increase in the energy consumption of European data centres (2015-2020), greenhouse gas emissions were reduced by 8% over the same period (cf. Borderstep 2020). | 15-Jan-2021 18:24

Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query renders a "negative" response — i.e., when a domain name doesn't exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we've studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign's plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.


NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there's also a corresponding public key.) No one can compute the outputs without the private key, hence the "random."
  2. A VRF has two outputs: a token and a proof. (I've adopted the term "token" for alignment with the research that I describe next. NSEC5 itself simply uses "hash.") Anyone can check that the token is correct given the proof and the public key, hence the "verifiable."

So, it's not only hard for an adversary to reverse the VRF — which is also a property the hash function has — but it's also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.
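To make the interface concrete, here is a toy VRF in the style of the RSA full-domain-hash construction from the VRF literature. The parameters are far too small for real use and the hashing is simplified, so this is a sketch of the prove/verify interface only, not a secure implementation and not the construction NSEC5 specifies.

```python
import hashlib

# Toy RSA parameters (vastly too small for real use; illustration only).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def h(msg: bytes) -> int:
    # Simplified full-domain-style hash of the input into Z_n.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def vrf_prove(msg: bytes):
    # Only the private-key holder can compute the proof ("random"),
    # and the output is deterministic for a given input.
    proof = pow(h(msg), d, n)
    token = hashlib.sha256(str(proof).encode()).hexdigest()
    return token, proof

def vrf_verify(msg: bytes, token: str, proof: int) -> bool:
    # Anyone with the public key (n, e) can check the token ("verifiable").
    if pow(proof, e, n) != h(msg):
        return False
    return token == hashlib.sha256(str(proof).encode()).hexdigest()
```

The two properties from the list above are visible here: computing a token for a new input requires the private exponent d, while checking a claimed token needs only the proof and the public key.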

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn't exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of an NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn't know the private key doesn't learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.
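On the validating side, after checking the signature on the range statement and the VRF proof, the remaining step is confirming that the query's token falls between the two endpoint tokens. A minimal sketch, with tokens treated as hex strings and a wrap-around case for the last range in the zone:

```python
def token_in_range(start: str, end: str, token: str) -> bool:
    # Endpoints are tokens of two consecutive existing names, compared
    # as hex strings; the last range wraps around to the first token.
    if start < end:
        return start < token < end
    return token > start or token < end
```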

The name server needs a copy of the zone's NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline — not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer's perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5's benefits are sufficient to justify its cost and complexity. Verisign doesn't have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client's information from disclosure.

In our approach, instead of asking the resolver for a domain name's IP address directly, the client would ask "What is token 3141…'s IP address," where 3141… is the tokenization of the domain name.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I've omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name's IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively "know" the domain name as well for practical purposes. (We've developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that's perhaps a topic for another post.)

Now, consider a domain name that doesn't exist, and suppose that its token is 2718….

In this case, the resolver would respond that the domain name doesn't exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.
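A minimal sketch of the resolver's view under this scheme, with made-up tokens and records: the resolver keeps a token-to-record map, so a miss discloses nothing beyond the token itself.

```python
# Hypothetical resolver-side table: tokens of existing names -> DNS records.
token_table = {
    "3141": "192.0.2.10",  # token of some existing domain (illustrative)
}

def resolve(token: str):
    # Hit: return the record (the resolver effectively "knows" this name).
    # Miss: respond with non-existence; only the token itself is learned.
    return token_table.get(token)
```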

The resolver could potentially know that the name doesn't exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn't have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an "oblivious" VRF protocol, where the name server doesn't see the actual domain name during the protocol, yet the client still gets the token.

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.
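A simple per-requester token bucket is one way to implement the rate limiting suggested above; the rate and burst values here are illustrative.

```python
import time

class TokenBucket:
    """Per-requester rate limit for the tokenization interface (sketch)."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst           # refills/sec, max stored
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size,
        # then spend one unit if available.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The name server would keep one bucket per requester (or per authorized API key), rejecting tokenization requests once the bucket is drained, which caps the speed of any online dictionary attack.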

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled "Privacy preserving data querying," and related patents.

It's interesting from a cryptographer's perspective that there's a way for a client to find out whether a DNS record exists without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it's already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.


The examples I've shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I've shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I'll consider the kinds of cryptography that may be needed in the event that one or more of today's algorithms is compromised, possibly through the introduction of a quantum computer.

Read the previous posts in this six-part blog series:

  1. The Domain Name System: A Cryptographer's Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

Written by Dr. Burt Kaliski Jr., Senior VP and Chief Technology Officer at Verisign | 15-Jan-2021 00:40

Brand Protection Beyond the "Whack-a-Mole" Approach

I recently shared at a conference how a seasoned brand and fraud expert from one of the world's largest global financial institutions lamented a major attack in which multiple fraudulent websites would pop up every single day. All attacks were launched from the same registrar and web hosting company, and no matter how often he reached out to these providers, he received the same reply: "we will pass on your request to the registrant or site owner," after which nothing happened. The brand and fraud specialist felt like he was playing a never-ending game of whack-a-mole, and he wondered why the registrar and web host were not getting in trouble for harboring the criminals, and why there was nothing he could do.

The answer could lie in the approach taken to online brand protection and whether a company is contributing to stopping the whack-a-mole game. Traditionally, most companies employ ongoing online brand monitoring and then enforce on what they find. But this will never fundamentally change the game: the endless cycle of detection and enforcement continues.

In recent years, some brand owners have started doing things a little bit differently. They have started to cooperate directly with platforms, and some also conduct online-offline joint operations. While these are extremely good measures — we also encourage our clients to establish direct communication with the platforms — this may still be inadequate because the world is changing.

1. Proliferation of eCommerce during COVID-19

  • Lockdowns and social distancing guidelines have forced people to buy online in most countries. According to recent statistics, eCommerce revenue has grown year over year by 110% in the U.S., 69% in the EU, 45% in APAC, and 200% in the rest of the world.
  • As the number of eCommerce platforms grows, it will be harder for brand owners to create and nurture meaningful cooperation with every platform in direct enforcement operations or programs.
  • Smaller emerging boutique eCommerce sites may not have the resources or experience to implement effective programs to protect brand owners.
  • Aside from counterfeiting issues where products are concerned, brands hold a lot of customer data. Phishing and cybersecurity breaches impact a brand's revenue and reputation and should be a concern for brands as well.

2. Deglobalisation and shifts in supply chains

  • During the pandemic, we've noticed more nations drawing boundaries and imposing internet and data privacy laws. More countries are safeguarding their national interests, protecting local supply and exports, supporting local industries, etc. This deglobalization of the world will fragment the internet. It's also reshaping the global supply chain and localizing brand infringement.
  • A lot of brand protection resources are currently focused on Mainland China, but if supply chains shift to Latin America and Southeast Asia, brand protection managers may need to rethink their strategy.

3. Growing ideology conflict

  • The EU's General Data Protection Regulation (GDPR) has caused most domain WHOIS records to be redacted, significantly reducing the ability to conduct online enforcement. The WHOIS redaction debate doesn't happen in the European Parliament, but at the Internet Corporation for Assigned Names and Numbers (ICANN), the organization responsible for coordinating the internet ecosystem, through a process called the Expedited Policy Development Process (EPDP).
  • On the one hand, human rights activists, who are typically very vocal, and some governments want to redact everything. On the other hand, law enforcement and some other government bodies want some disclosure. But the pro-redaction camp is winning because in the ICANN world you also have registries, registrars, and hosting providers, none of whom want any disclosure. One registrar has even stopped collecting any information at all.
  • But what is the sentiment of the business and IP communities, and is their voice heard where policies are made? Brand owners often ask:
    • Who is the infringer?
    • Can I get the information to prosecute?
    • How can I get the registrar to take action?
    If companies need to find out who the infringer is, get information, or even find a better way to get a registrar to take action, then they need to start paying attention to the internet policies that impact their brand protection strategies.

There are numerous internet policies that are critical in determining the success of a brand protection manager.

Take the Digital Millennium Copyright Act (DMCA), for example. It established that "online service providers" are not accountable for infringements committed using their services (provided certain conditions are met, i.e., safe harbor). As a result, many registrars claim that because they have no access to or control over the content, they're not obliged to take action, while many ISPs simply reply that they have passed on the complaint, as they are not held liable. However, some newer copyright regulations, such as the EU Digital Single Market copyright directive and some new laws in China, may mitigate the issue of platforms not being held accountable.

Some internet policies have a global reach, such as the Rights Protection Mechanisms currently in revision at ICANN. Some policies are local in nature, such as the UK IP Protection Pilot Program that allows providers such as CSC to use a different method for infringement takedowns.

Some internet policies are not intended to be internet policies but can impact and change the landscape of how the online world works. For example, China's Anti-Monopoly Rule may allow boutique eCommerce platforms to thrive in China, which in turn will change how you should conduct online brand protection.

It's important the business community acts together to influence the development of these policies at various levels.

In conclusion, I have three recommendations for brand owners:

  1. Continue to do the basics — monitoring, enforcement, and developing platform relationships involving three-way partnerships among brand stakeholders, brand protection providers (as the workload is going to be heavier with more emerging platforms), and platforms.
  2. Start paying attention and play an active role in internet policy development; there are numerous forums for enterprise engagement.
  3. Think security, and think of brand protection beyond just anti-counterfeiting. Data is king; brand protection also means anti-fraud, anti-phishing, and protecting the brand on social media, in app stores, and on stand-alone websites.

Written by Alban Kwan, East Asia Regional Director at CSC | 14-Jan-2021 22:36

All Roads Lead to… Domains: Why the Humble Domain Name is the Foundation of Your Online Security

For most people, a domain is just an address that you type into a browser, but for businesses, domain names are the foundation of their online presence. A recent article says, "When it comes to operating a business online, the domain name is the center of everything. The domain name should ensure a frictionless and painless experience for the company, its customers, its partners and suppliers, and its employees."

In this blog, we'll explore why the humble domain name means so much more with regards to online security, and why good domain security and portfolio management are essential to an organization's online presence — and what happens without it.

Setting the scene

Imagine the scenario. You log into your computer and go to your company website. It's down, returning an error message. You open your emails to send a message to IT to let them know. The application opens, but you can't connect, and can neither send nor receive emails. You open your company's softphone application to call IT. That's not working either. There's been a security breach; cyber criminals have targeted your organization, and now your website and all of your communication tools are down. How did they manage to infiltrate your systems?

Here's another scenario. You work for Toyworld, a manufacturer of children's toys. Counterfeiters are selling fake versions of your products on bogus websites that look and feel like your website, but offer the goods at vastly discounted prices. It's not only an intellectual property infringement, it's also directing traffic away from your website, causing you to lose revenue, and it's reported that some of the fakes that are battery powered are catching on fire due to shoddy electrics, putting consumers' health and safety at risk and damaging your reputation. You need to take down these websites as quickly and effectively as possible, but where do you start?

Final one; you work for a financial institution and your boss sends you an urgent email asking you to set up a new supplier and pay them as soon as possible. She'll be in meetings until late, she says, and it needs to be done by the end of the day; it's 4:50 p.m. What you don't know is that the email is not from your boss, it's actually from a phisher trying to extort money from the company. How did the phisher manage to send an email to you posing as your boss?

The answer to all three of these questions is domains.

In the first example, cyber criminals can use domain name or domain name system (DNS) hijacking to take down or redirect websites — or bring down email, virtual private networks (VPNs), and voice over IP (VoIP) — putting that business in jeopardy of revenue or brand reputation loss. When cyber criminals penetrate your domain name or DNS, they can then use phishing techniques to harvest credentials and ultimately breach your network. Such breaches expose personal information and can leave your organization vulnerable to significant financial penalties due to policies like the General Data Protection Regulation (GDPR). All of this can happen through the compromise of a single domain name, making domains a high-risk vulnerability.

In scenario two, the crux of the issue lies with the existence of the websites in the first place — and to create a website, you must register a domain. The counterfeiter buys a domain that includes your company's brand name, likely at low cost from a retail-grade registrar, and sets up their website with your organization's branding. With a domain monitoring and takedown service, you can cluster abusive sites owned by the same registrants and get them taken down in bulk.

Finally, phishing attacks of all kinds — not just business email compromise (BEC) scams like the example mentioned — start with, yes you've guessed it, a domain name. Phishers buy a domain that is usually only one letter different from a genuine domain or, in some cases, pick up domains that have accidentally lapsed, and then use social engineering techniques to trick their targets into sharing personal details, downloading malware (which can then compromise your DNS), or paying money.
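As a minimal sketch of how such lookalike domains might be flagged, here is an edit-distance check in Python. The domain names are hypothetical, and real brand monitoring services use far richer signals (homoglyphs, keyboard adjacency, registration data); this only illustrates the "one letter different" pattern described above:

```python
# Sketch: flag candidate domains that are one edit away from a genuine
# domain, a common typosquatting pattern. Example names are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(candidate: str, genuine: str) -> bool:
    """One character added, removed, or changed relative to the real name."""
    return candidate != genuine and edit_distance(candidate, genuine) == 1

print(looks_like_typosquat("toyw0rld.com", "toyworld.com"))  # True
print(looks_like_typosquat("example.org", "toyworld.com"))   # False
```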

So as you can see, it only takes one slip-up to bring down your company's online presence or put consumers and staff at risk when it comes to domains. That's why at CSC, we advocate comprehensive domain security and portfolio management, brand protection, and fraud protection solutions.

Protect the king

Securing domains is like protecting the king in a game of chess — once the king falls, the game is over. The other pieces in the chess set are your means of protecting him; with domains, these pieces are the key security protocols that you can put in place to protect your domains:

  • DNS security extensions (DNSSEC) — validates each step of the domain look-up process, preventing DNS spoofing, SAD DNS attacks, and cache poisoning.
  • Registry lock — prevents unauthorized changes to your domains at the registry level.
  • Domain-based message authentication, reporting, and conformance (DMARC), sender policy framework (SPF), and DomainKeys Identified Mail (DKIM) — email authentication protocols that ensure any emails received are coming from where or whom they say they are, preventing phishing attacks like BEC, spear phishing, and whaling.
  • Digital certificates and certificate authority authorization (CAA) records — digital certificates ensure a secure environment for your customers to visit or purchase things from your official website. CAA records make sure that bad actors can't obtain certificates for your domains from an unapproved certificate authority.
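To make these protocols concrete, here is roughly what the corresponding DNS records can look like. The domain, mail provider, DKIM selector, and certificate authority below are illustrative assumptions, not recommendations:

```
; Hypothetical zone-file entries for example.com (illustrative values only)
example.com.                  IN TXT  "v=spf1 include:_spf.example-mailer.com -all"
_dmarc.example.com.           IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public-key>"
example.com.                  IN CAA  0 issue "letsencrypt.org"
```

Here SPF lists the servers allowed to send mail for the domain, DMARC tells receivers to reject mail that fails authentication, DKIM publishes the signing key, and the CAA record restricts which certificate authority may issue certificates.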

Finally, working with a single enterprise-class provider — which not only offers all of the above security protocols, but also has domain, brand protection, and fraud protection offerings under one roof — means that your whole online presence will be the most secure it can be.

We're ready to talk

If you'd like to find out more about any of the issues in this blog, visit our domain security page or complete our contact form. | 14-Jan-2021 19:27

More Warning Shots for ICANN, or the End of the Road?

Last fall, I wrote about ICANN's failed effort to achieve its goal of preserving the Whois domain name registration directory to the fullest extent possible. I predicted that if the policy effort failed, governments would take up the legislative pen in order to fulfill the long-ignored needs of those combating domain name system harms. That forecast has now come true through significant regulatory actions in the United States and the European Union in the form of a proposed directive from the European Commission (EC) and instruction from the US Congress to the National Telecommunications and Information Administration (NTIA).

ICANN Org now faces a stark choice: recoil and be a standby witness to what unfolds, or recognize that these further shots across its bow require it to boldly act. This means replacing the weak expedited policy development process (EPDP) team proposals and related implementation with robust requirements that track the EU's proposed 2.0 version of its Directive on Security of Network and Information Systems ("NIS2 Directive"), redirecting community efforts toward a centralized global access model for Whois that so many have been asking ICANN to develop, and revamping the accuracy requirements for Whois.

The alternative is that ICANN will find itself in the back seat in terms of who really gets to make Whois policy.

Regulatory Action in the European Union Requires ICANN to Revamp its Whois Policies

The developments have come quickly on both sides of the Atlantic.

Starting in Europe, the EC, following a re-examination of critical components of the General Data Protection Regulation (GDPR), now demands continued public access to Whois through a portion of the proposed NIS2 Directive. Specifically, the NIS2 Directive confirms the validity of the Whois database for legitimate purposes, ensures the ongoing collection of data, and mandates its accuracy.

The proposed directive further contains a very detailed set of instructions that deal almost exclusively with the areas of ICANN policymaking failure. In fact, it demands action in the areas all but ignored by the EPDP team output but flagged by the broader ICANN community as woefully inadequate. Specifically:

  • Ongoing collection of data by registries (such as .com and .net) and registrars;
  • Preventing inaccurate records;
  • Distinction between legal and natural persons; and
  • Efficient provision of data for legitimate requests (including service level agreements).

The directive prescribes, in particular, that registries and registrars publish non-personal registration data and provide expeditious access for legitimate purposes.

It's clear that these legislative proposals are intended to resolve the problems created by misapplication of the GDPR by the ICANN community.

US Authorities Recognize the Inadequacy of ICANN's WHOIS Proposals

In the United States, end-of-year congressional action brought similar emphasis on Whois.

Specifically, as part of a governmental funding bill, US lawmakers set their sights on fixing the Whois issue, at least in their jurisdiction. Providing reasoning for their requests in a joint explanatory statement, members of Congress tell the NTIA (which sends the US representative to ICANN's Governmental Advisory Committee) how they expect them to act in exchange for departmental funding — namely, NTIA is directed to work with the GAC to expedite a Whois access model, and is encouraged to require US-based registries and registrars to collect and make public accurate registration data.

ICANN observer Greg Thomas, in a recent blog posting, reinforces the importance and possible impact of this congressional language, writing:

With this report language, Congress is clearly signaling that it is running out of patience with the lack of a mechanism for law enforcement, IP owners and others needing access to registrant identifier information for legitimate purposes such as criminal investigations and protecting rights online.

Even the author of ICANN's blog post, compliance chief Jamie Hedlund, acknowledges that Congress may look to more aggressive measures if the community can't produce more effectively than it has. Lack of a credible access model from ICANN means that NTIA will have a hard time defending the ICANN model before Congress when it's time to decide who ultimately makes domain name policy.

Thus far, ICANN Org has not yet taken this move from Congress as a positive and empowering call to action but has instead made an attempt to explain away at least part of this request, saying that the word encouraged is aspirational and not a mandate in terms of what might be required of registries and registrars. It's wishful thinking on ICANN's part. However, ICANN Org would be wise not to bank on semantics in the face of growing governmental frustration from both the US and Europe, which may lead to even stricter regulatory requirements should ICANN ignore these warnings.

A Course Correction Is Needed to Prevent Additional Regulatory Action

ICANN and its policymaking apparatus very much need a course correction on the issue of Whois. "Sooner or later" seems to be finally here, as the warning shots are beginning to look increasingly like governments taking up pen in very specific ways that will direct Whois policy.

This leaves the ICANN Board with no option other than to clearly reject the currently proposed access model — it's wholly insufficient, anyway — and direct ICANN Org to cease implementation on EPDP team recommendations while it better understands the potential impact of these EC and US Congressional developments. Doing otherwise is to blindly careen down paths that likely lead to conflict with US and EC directives on Whois, and further stretches an already stressed and exhausted ICANN community.

Written by Fabricio Vayra, Partner at Perkins Coie LLP | 14-Jan-2021 14:57

Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a "positive" response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered the question of "negative" responses — when the queried domain name doesn't exist. In this case, as I'll explain, returning a signed "does not exist" answer is not as straightforward as it might seem. This makes the non-existence case interesting from a cryptographer's perspective as well.

Initial Attempts

Consider a second-level domain name under .arpa that doesn't exist.

If it did exist, then as I described in my previous post, its second-level domain (SLD) server would return a response signed by the zone's zone signing key (ZSK).

So a first try for the case where the domain name doesn't exist is for the SLD server to return the response "this name doesn't exist," signed by the zone's ZSK.

However, if the domain doesn't exist, then there won't be either an SLD server or a ZSK to sign with. So, this approach won't work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in this example — to return the response "this name doesn't exist," signed by the parent zone's ZSK.

This could work if the .arpa name server had online access to the private ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone's provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn't exist. So, this won't work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a "range statement," previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two "endpoints" where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.
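The range-statement selection described above can be sketched in a few lines. This is a simplified model with a toy zone: plain string ordering stands in for canonical DNS name ordering, and the offline signing step is omitted (a real implementation signs each precomputed range with the ZSK):

```python
import bisect

# Toy zone: existing names in sorted order (real DNSSEC uses canonical
# DNS name ordering; plain string sort stands in for it here).
existing = sorted(["alpha.example", "delta.example", "zulu.example"])

# Precompute one "range statement" per gap, including the wrap-around
# range from the last name back to the first. In practice these would
# be signed offline with the zone signing key.
ranges = [(existing[i], existing[(i + 1) % len(existing)])
          for i in range(len(existing))]

def covering_range(qname: str):
    """Pick the precomputed range proving that qname does not exist."""
    if qname in existing:
        return None  # name exists; a positive signed answer is returned
    i = bisect.bisect_left(existing, qname)
    return ranges[(i - 1) % len(existing)]

print(covering_range("golf.example"))  # → ('delta.example', 'zulu.example')
```

The name server never needs the private key at query time: it only chooses which presigned range statement covers the queried name.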

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I've described two such approaches. For simplicity, I've focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.


NSEC

The first approach, called NSEC, involves no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for "Next Secure," referring to the fact that the second endpoint in the range is the "next" existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4034, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives a request for the IP address of a name that doesn't exist, it returns a signed response stating that there are no names between the two existing names that surround the queried name in the zone's sorted order. This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of a NSEC proof of non-existence (as of the writing of this post).
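This enumeration side effect can be illustrated with a toy "zone walking" loop: by following the second endpoint of each NSEC range, a client can list every name in a zone without guessing any of them. The model below is a simplified stand-in that uses a handful of names from the .arpa zone:

```python
# Toy model: next_name maps each existing name to the next existing name
# in sorted order (the two endpoints of its NSEC record), wrapping around
# from the last name back to the zone apex.
next_name = {
    "arpa": "as112.arpa",
    "as112.arpa": "home.arpa",
    "home.arpa": "in-addr.arpa",
    "in-addr.arpa": "arpa",   # wrap-around back to the zone apex
}

def walk_zone(start: str) -> list[str]:
    """Enumerate all names by repeatedly following NSEC 'next' pointers."""
    names, current = [start], next_name[start]
    while current != start:
        names.append(current)
        current = next_name[current]
    return names

print(walk_zone("arpa"))  # every name in the toy zone, no guessing needed
```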


NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with another domain name that doesn't exist, this time under the .name TLD. Here, the .name TLD server returns a range statement that "There are no domain names with hashes between 5SU9… and 5T48…." Because the hash of the queried name is "5SVV…," which falls within that range, the response implies that the name doesn't exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of a NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or "dictionary" attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed "offline," i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
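A simplified sketch of such an offline attack follows. Real NSEC3 uses salted, iterated SHA-1 with Base32-encoded output; a single round of unsalted SHA-1 stands in for it here, and the names are hypothetical:

```python
import hashlib

def h(name: str) -> str:
    """Stand-in for the NSEC3 hash (real NSEC3 uses salted, iterated
    SHA-1 with Base32 output; one round of plain SHA-1 suffices here)."""
    return hashlib.sha1(name.encode()).hexdigest()

# Hashed endpoints an adversary has collected from NSEC3 responses.
observed_hashes = {h("mail.example"), h("www.example")}

def crack(observed: set[str], candidates: list[str]) -> list[str]:
    """Offline dictionary attack: hash common labels and look for
    matches, with no further queries to the name server."""
    return [c for c in candidates if h(c) in observed]

guesses = ["www.example", "ftp.example", "mail.example", "vpn.example"]
print(crack(observed_hashes, guesses))  # → ['www.example', 'mail.example']
```

Salting and iterating the hash, as NSEC3 actually does, raises the cost of each guess but does not prevent this kind of search, which is what motivates the VRF-based approach discussed next.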

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I'll describe NSEC5, an approach still in the experimental stage that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I'll also share some research Verisign Labs has done on a complementary approach that helps protect a client's queries for non-existent domain names from disclosure.

Read the first post in this six-part blog series:
The Domain Name System: A Cryptographer’s Perspective

Written by Dr. Burt Kaliski Jr., Senior VP and Chief Technology Officer at Verisign | 13-Jan-2021 22:51

Can We Control the Digital Platforms?

The digital market has matured over the last 20 years, and governments can no longer justify doing nothing in the name of letting new markets and innovations emerge free of immediate regulatory oversight.

It has become clear this period is now well and truly over. The European Commission has already launched several lawsuits against the digital giants. Regulation of this kind is generally "ex-post" (applied after the deed has been done). This is set to change, as I will explain later.

My colleague Scott Marcus from the Brussels-based economic think tank Bruegel participated in a very interesting discussion on this topic. I will tap into that discussion in this article.

The digital platforms that have emerged are successful because they are very big indeed. Their business sector differs from others in that digital firms can grow very big, very quickly, without requiring massive investments. Compare this, for example, with other global sectors such as the car or airline industry. Furthermore, the digital giants operate across traditional industry sectors, and as gatekeepers, they have a massive impact on the overall economy.

Just splitting the digital giants up, the European Union argues, would take away the universal services that are available over these platforms. Consumers would be confused and lose interest if they had to use dozens and dozens of such services to achieve a similar outcome. It is like telecoms, electricity, and water — the fact that these services are ubiquitous is what makes them so successful.

As governments now have a good idea about the pros and cons of these platforms, it becomes possible to look at how to best regulate them to avoid the range of illegal and harmful activities that are being conducted over these platforms.

It is rather useless if all 200+ governments around the globe start issuing their own regulations. Australia's attempt to address just one aspect of the problem, getting the platforms to compensate the local press, is clearly not the best way forward. Firstly, it only addresses one small issue, and secondly, Australia is acting as just one of those 200+ governments.

It makes far more sense to start looking at these digital giants more strategically and, at the same time, see if this can be done in a more unified way across nations.

The EU is trying to do this. They already took the lead in the General Data Protection Regulation (GDPR), which has now been adopted widely across the globe. This time, they have introduced two Acts aimed at stopping illegal activities over these platforms, requiring the giants to come up with measures to stop harmful content and at the same time to open the platforms so competition will be made possible on top of them. The European Commission has now proposed two legislative initiatives: the Digital Services Act (DSA) and the Digital Markets Act (DMA).

The DSA and DMA have two main goals:

  • to create a safer digital space in which the fundamental rights of all users of digital services are protected; and
  • to establish a level playing field to foster innovation, growth and competitiveness, both in the European Single Market and globally.

This is aimed at the platform gatekeepers, and only at the large ones, as measured by several parameters, a key one being that they provide services to at least ten percent of European Union citizens (an average of 45 million monthly users).

While it looks certain that these Acts will indeed be put into law, this could easily take one or two years and will include one of the most serious lobbying activities ever seen in Europe — the stakes are enormous, and the giants are very powerful.

Key elements in the Acts are that they are ex-ante: obligations will have to be implemented beforehand instead of conduct being judged afterward. The reasoning for regulating the gatekeepers this way is that as they wield great power, they must also accept great responsibility, which means that they have obligations.

Because of the complexity and the proprietary nature of these platforms, governments have very little insight into the illegal activities taking place on them. For that purpose, the Acts will force the gatekeepers to be more transparent. In relation to harmful content, the Acts opt for co-regulation. They will ask the gatekeepers how they will address these issues and will require them to provide half-yearly or yearly reports on their progress.

On the competition side, the Acts require the gatekeepers to open their platforms. The issues have been divided into a blacklist and a whitelist.

The blacklist requires them to:

  • not use data from the businesses that use the platform to compete with them;
  • not self-preference their own services; and
  • not use their strong market position over their competitors.

On the whitelist side, gatekeepers should allow third parties to integrate their systems with the gatekeepers' own, and allow businesses to export data related to their own services from the platform. As competing with the giants is currently close to impossible, interoperability would allow for more competition on top of the platforms.

A key element now will be to start a dialogue with the new U.S. Administration to come up with an overall policy. The platforms are a great addition to our society and our economy, but at the same time, we need far more transparency, interoperability and the ability to compete on top of these platforms. Will this undermine the economic viability of these large platforms; are they moving into utility territory?

I am looking forward to the discussions that are going to take place. If we combine our brainpower, we can surely come up with better outcomes. So, interesting times ahead. It is great to see Europe taking the lead in the thinking process behind this as they have nothing to lose, unlike the USA and increasingly China, where these platforms reside. This is a delicate situation with different national interests. Can industry and government come up with mature and good solutions? Yes, they can, but are we willing to do it?

Written by Paul Budde, Managing Director of Paul Budde Communication | 12-Jan-2021 20:17

Enriching Intrusion Detection and Prevention Systems with IP and Domain Intelligence

Intrusion detection systems (IDSs) and intrusion prevention systems (IPSs), collectively called "intrusion detection and prevention systems (IDPSs)," monitor network traffic to stave off unauthorized access. Roughly speaking, an IDS detects possible malicious network activities, while an IPS stops malicious traffic from entering and possibly damaging a network.

To successfully provide protection, IDPSs inspect and analyze each data packet. If necessary, the systems would then alert security administrators. Depending on how they are configured, IDPSs can stop an attack by dropping the malicious packet, resetting the connection, or blocking network traffic.

As with any other cybersecurity solution, an IDPS's effectiveness lies in the prompt and correct detection of possible malicious activity. IP and domain intelligence can provide additional data points on which IDPSs can base their detection techniques.

IP- and Domain-Based Detection

One technique IDPSs use is to look for known exploits or activities that are similar to or associated with an already-identified attack. This detection technique is signature-based, since it looks for previously identified signatures or code used by attackers.

However, attackers not only reuse their code, they also use the same IP and domain infrastructure against different targets. To illustrate, we obtained the top 10 most widely reported IP addresses on 5 January 2021 from AbuseIPDB. We then tabulated the number of unique reports and unique users for each IP address since the first time it was reported.

IP Address              Unique Reports   Unique Users
45[.]155[.]205[.]86     9,977            402
45[.]155[.]205[.]87     9,665            382
221[.]181[.]185[.]135   17,732           341
221[.]181[.]185[.]29    17,661           348
221[.]181[.]185[.]136   15,921           336
221[.]181[.]185[.]143   13,768           315
221[.]181[.]185[.]18    17,603           346
221[.]181[.]185[.]148   13,780           313
221[.]181[.]185[.]19    17,405           341
221[.]181[.]185[.]199   17,335           338

Since IDPSs inspect network packets, they could also examine the IP address within each packet and use IP intelligence sources to check for associations with malicious IP addresses. The IP addresses in the table above, for instance, belong to two IP netblocks according to IP Netblocks API. The first two IP addresses belong to IP netblock 45[.]155[.]205[.]0 — 45[.]155[.]205[.]255, while all the others belong to 221[.]181[.]184[.]0 — 221[.]181[.]191[.]255.

As such, IDPSs could be configured to analyze packets that contain IP addresses belonging to the IP netblocks associated with malicious activity.
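Such a rule can be sketched with Python's standard ipaddress module. The two CIDR blocks below correspond to the netblocks mentioned above (with the defanged brackets removed); the test addresses are illustrative:

```python
import ipaddress

# The two reported ranges, expressed as CIDR networks:
# 45.155.205.0-45.155.205.255 is a /24,
# 221.181.184.0-221.181.191.255 is a /21.
flagged_netblocks = [
    ipaddress.ip_network("45.155.205.0/24"),
    ipaddress.ip_network("221.181.184.0/21"),
]

def is_flagged(ip: str) -> bool:
    """True if the packet's source IP falls inside a flagged netblock."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in flagged_netblocks)

print(is_flagged("221.181.185.135"))  # True  -- inside the /21
print(is_flagged("198.51.100.7"))     # False -- not in a flagged range
```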

What's more, an IP address found in the packet header could also be associated with malicious domains and should be blocked or, at the very least, reported to security administrators. One way to find out is to use Reverse IP/DNS Lookup. For instance, the IP address 156[.]254[.]105[.]3 may not raise any alert, as it hasn't been reported in blacklist sites, such as AbuseIPDB and VirusTotal.

However, Reverse IP/DNS Lookup revealed that it is associated with five domain names, including tisone360[.]com, which is related to the Darkhotel APT group. IDPSs could better protect networks by blocking packets containing such IP addresses.

Anomaly-Based Detection

Another technique most IDPSs use is anomaly detection, which aims to capture abnormal network activities. An additional criterion would be to look at the geolocation of the IP address in the packet header. Is the source IP address located in a region the company has no dealings with? Or can it be traced to a high-risk location?

If the packet's IP geolocation lies in a region not previously seen in the network, the IDPS can alert security administrators so the packet can be further scrutinized. On the other hand, if the network activity is located in a region where cyber attackers abound, blocking the traffic may be wise.
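A minimal sketch of such a triage rule follows. The geolocation lookup is stubbed out as a dictionary; a real deployment would query an IP geolocation service, and the region lists, country codes, and IPs below are illustrative assumptions:

```python
# Stub standing in for a real IP geolocation service (illustrative data).
GEO_DB = {
    "203.0.113.9": "NL",
    "198.51.100.7": "BR",
}

EXPECTED_REGIONS = {"US", "NL", "DE"}   # regions the company deals with
HIGH_RISK_REGIONS = {"XX"}              # placeholder for high-risk codes

def triage(ip: str) -> str:
    """Map a source IP to an action based on its geolocation."""
    country = GEO_DB.get(ip)
    if country is None:
        return "alert"   # unknown origin: flag for manual review
    if country in HIGH_RISK_REGIONS:
        return "block"   # traffic from a high-risk region
    if country not in EXPECTED_REGIONS:
        return "alert"   # unusual region: scrutinize the packet further
    return "allow"

print(triage("203.0.113.9"))   # allow
print(triage("198.51.100.7"))  # alert
```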

Cybersecurity solutions, including IDSs and IPSs, continue to evolve to adapt to the increasing sophistication of cyber attacks. Adding more sources, such as IP intelligence tools, can widen their scope of detection. | 12-Jan-2021 20:06

Digital in 2021 – Five Predictions for Brand Protection

While smartphones were an integral part of our lives before 2020, now, as a result of the changes associated with COVID, our mobile devices are virtually "super-glued" to our hands. The worldwide pandemic has heavily influenced our lives. Based on our past experiences with digital brand protection and the trends we're currently seeing, we've made five predictions regarding the future of internet usage in 2021.


1. 2021 will see faster adoption of digital communications and collaboration software at work and at home.

For almost everyone, text, video calls, and web conferences replaced in-person meetings, classes, and almost every other formerly face-to-face interaction. Furthermore, the collaboration feature sets of Microsoft Office, Google Docs, Asana, and other productivity tools, augmented by web meeting software including Zoom, Google Meet, and Webex, allow us to share creative processes and manage complex workflows. These feature sets are especially valuable in meeting the challenges posed by distributed workforces. As these technologies become even richer and more mobile-accessible, we'll use them more, super-charged by the high-speed connectivity enabled by 5G.

2. Expect "late majority" adopters and laggards to accelerate their cloud investments in 2021.

The "work diaspora" is here to stay. With the largest tech companies allowing extended periods of WFH (work-from-home), and many other companies from all sectors offering or contemplating permanent opportunities for their workforce to work away from the office, the global workforce will continue to be distributed. Functional and technical elements of our computing infrastructure that have not yet moved to the cloud are quickly migrating there. Platforms ranging from data storage and artificial intelligence to security infrastructure have all taken root in the cloud, further propelling the growth of industry leaders Google, Amazon and other upstarts. And, with the speed, convenience and scalability of these platforms, companies can better manage growth, seasonality and other problems that were more difficult with self-managed dedicated hardware and software. 

3. Entertainment trends that started in 2020 will continue: mobile gaming will keep growing rapidly, and live events, including movie screenings and sports, will rely further on streaming revenue.

Whether you love or hate digital first-run blockbusters like the movie "Wonder Woman 1984" or the videogame "Cyberpunk 2077", few of us can say that we haven't sought new entertainment options since the COVID pandemic shuttered theaters and other entertainment venues. With more powerful mobile phones, faster wireless and wired internet speeds, and more digital and gaming content, most people are spending more time on their phones, on their laptops and in their living rooms watching content and playing games.

And while there are fewer live team sporting events, national and international sporting leagues and associations have found ways to keep their teams and fans safe while broadcasting their events online. For the gamers among us, Twitch and YouTube had already become popular places to watch pros play and convene with other players. COVID has accelerated that adoption curve, and we predict even faster adoption of mobile gaming and digital entertainment in 2021.

4. 2020 shopping innovations like curbside pickup and delivery options will be here to stay.

Before COVID, everyone dreaded a late-night visit to the market to pick up a quart of milk, or a trip to a crowded mall over the holidays to grab a last-minute gift. And though in the past we've enjoyed the convenience of brick-and-mortar outlets like Target, Best Buy, Walmart and our other favorite stores, we now have the option to combine the convenience of e-commerce with brick-and-mortar locations through options like curbside pickup and nearly instant home delivery from Postmates, Amazon and others.

5. Cash apps and touchless payment have taken root; expect further financial app innovations like those that have been led by companies like Stripe, Square and Robinhood.

Rest assured, no one ever loved picking up an oft-used pen to sign a credit card slip at a store or restaurant. The pandemic saw us adopting touchless checkout options like Apple Pay and Google Pay, cash exchange apps like Venmo and Square Cash, and other modes of payment. Now, with a press of a button, a wave of a phone, or a few sweeps on a screen, we can skip finding a credit card, writing checks, or digging up some dirty old bank notes. And with bank branch closures and limited visits to ATMs as a result of lockdowns, more and more people have adopted mobile banking apps and online banking, making it easier to manage money without human contact.

Analysis and Implications

Everything Old Is New Again

With the accelerating adoption of wide-ranging mobile and digital technologies, there are now large communities of new and inexperienced users. These new users, often elderly or very young, are not accustomed to spotting and ignoring scams that target the naive. Phishing scams, support scams, fake free offers and other cons all seem plausible to them. In 2021 we will see the rehash of all the old scams, and more of them.

"Idle hands are the Devil's workshop"

With millions out of work and uneven governmental support for people, small businesses and local governments, times will become more desperate, and more bad actors will target the new users mentioned above. As a result, losses due to fraud and deception will increase.

Organized Bad Actors Prevail

Organized bad actors, those that display a mastery of promoting fraud and deception in digital channels, will successfully fool new and old internet users. They'll grab attention using social, advertising and other means to drive traffic to web pages and websites where they can do the most harm. These organized perpetrators use multiple levels of obfuscation so brands without the appropriate level of technology-based intelligence will engage in one-off, whack-a-mole enforcement tactics. Meanwhile, these networks of bad actors will continue to bilk consumers out of money and personal information based on the trust earned by brands over years or decades. 

The business of working from home

With workers at home, sometimes distracted by their children, roommates, the news and other factors, bad actors will have a "field day." Combine a greater number of accessible systems and the vulnerabilities created by remote technologies with age-old and new techniques like spear phishing, business email compromise, malware, and ransomware. The result? Bad actors will attack more companies through their remote workforces.

Regulatory environment - a patchwork

With the global domain name system failing to abate abuse and, in fact, thwarting consumer protection, expect a patchwork of local laws targeting attribution and prosecution of bad actors. Add in expected new regulation of digital platforms that may reshape notice and takedown measures, and get ready for confusion and turmoil in the world of notice and takedown as local laws and regulations proliferate.

What should brands do?

Aggressively monitor

Work cross-functionally with the product, commercial and marketing organizations at your company to understand the digital journey of your customers and aggressively monitor all digital channels for abuse targeting your customers' buying journey. 

Use advanced technologies to identify systemic abuse 

The latest technologies can help you find the bad actors who are most adept at using digital channels to attract your customers. Identifying systemic abuse will help you understand where you are most vulnerable and where you'll get the best "bang" for your brand protection "buck."

Prioritize organized actors

Focusing your brand protection efforts on organized networks of bad actors will yield the best return on investment. The trends listed earlier in this post mean that you are very likely to see increased abuse in 2021, but prioritizing the offenders who display mastery of digital channels will deliver meaningful results.

Use advanced attribution techniques

For the largest networks, work out how they operate and who is behind them. Examining the source code for their web pages and apps, their privacy policies, underlying technologies and monetization methods will provide solid indicators of the identity of the perpetrators.

Map networks of abuse, disassemble them and disable promotional and monetization modes

Map these abuse networks so you can identify the ones that are most complicated and use that intelligence to create a strategy to dismantle and disable them. Use the information about the network, its composition, who's behind it, and the damage it causes your company and customers to take the network down at the root, no matter how deeply obscured or complex the network.

Closing Thoughts

In 2021, we can be certain of two things: the mobile and internet user attack surface has never been larger and shows no signs of shrinking, and bad actors are more agile and sophisticated in their methods than ever before. As a result, brands need to up their game. Legacy brand protection methods will no longer suffice. New technology and cross-functional collaboration are crucial for abating these threats to business and our new lifestyle.

Written by Frederick Felman, Chief Marketing Officer at AppDetex | 12-Jan-2021 19:51

Trump's Parting NTIA 5G Debacle

As Trump's horrific Administration of non-stop debacles and self-serving gambits headed toward the exit over the past few weeks, one last regulatory grab after another has been pushed out the door while the toddler-in-chief rants. Sure enough, the last of the 5G debacles just appeared in the Federal Register courtesy of the President's policy instrument, the National Telecommunications and Information Administration (NTIA). It was titled the 5G Challenge Notice of Inquiry (NOI).

The NOI proposes that U.S. 5G private sector resources be re-vectored from participating in long-existing global 5G standards bodies to help advance self-serving schemes cooked up by some of Trump's supporters now resident in the DOD before they depart. The "challenge" would have DOD in effect replace 3GPP and other open global standards organizations as the U.S. body for developing 5G standards. The Biden Administration should shut down this proceeding immediately.

At the outset of Trump assuming power, he and his minions sought to destroy anything and everything global and build walls around the nation. This included 5G telecommunication and information systems and the global marketplace. Highly successful and fully open global 5G industry bodies were painted as closed and biased against American interests. Impediments were placed in the way of U.S. private sector participation, xenophobic equipment bans were instituted, and Friends of Trump marshalled for what has been described as raiding the Federal funding and spectrum piggy banks. It is the successor to the cockamamie scheme to federalize the national 5G services proposed earlier.

Historically, the playbook of the Harding Administration a century ago was resurrected. A gopher for one of Trump's only Tech supporters was brought into the White House and appallingly named as the "U.S. CTO" to write Trump's pronouncements and then moved over to DOD in a senior position to pry open the piggy banks. Never mind their pronouncements made plain an utter lack of understanding of 5G.

The reality is that substantial U.S. private sector 5G resources exist and are being demonstrably deployed today for effective participation in existing fully open global 5G specifications bodies. The new Biden Administration has an opportunity to significantly enhance that participation to the benefit of America and the world. Trump's minions cooking up a scheme to re-vector those resources to line their own pockets as they turn in their badges is nothing less than reprehensible. The NTIA 5G Challenge NOI should be terminated immediately.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC | 12-Jan-2021 18:54

Are Big Tech CFOs (Inadvertently) Stealing From Shareholders?

When valuing a stock, analysts and shareholders always evaluate revenue and profit. Big tech CFOs are sitting on assets worth tens of millions of dollars in annual profit (not just revenue, but true profit) in the form of unallocated IPv4 addresses. By not selling or leasing them out, they are incurring costs to hold them and missing out on tremendous profits. At a 20X multiple (for context, Cisco is trading at nearly 18X earnings, Google at just over 33X earnings, Shopify at well over 700X earnings), big tech CFOs are collectively forgoing over $250 billion in market capitalization for their shareholders.
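The valuation arithmetic behind the headline figure is simple capitalization of profit at an earnings multiple. The sketch below shows it in Python; the address count and per-address lease profit are illustrative assumptions of ours, not figures from this article:

```python
# Sketch of the implied-market-cap arithmetic behind the $250B claim.
# All inputs are illustrative assumptions, not figures from the article.

def implied_market_cap(annual_profit_usd: float, earnings_multiple: float) -> float:
    """Market capitalization implied by capitalizing annual profit at a P/E multiple."""
    return annual_profit_usd * earnings_multiple

# Suppose big tech collectively holds ~100 million unallocated IPv4 addresses,
# each yielding ~$125/year in lease profit (both numbers are hypothetical).
addresses = 100_000_000
profit_per_address = 125.0
total_profit = addresses * profit_per_address   # $12.5B per year

print(implied_market_cap(total_profit, 20))     # 250000000000.0
```

The point of the sketch is that the multiple does the heavy lifting: every dollar of recurring lease profit left unrealized suppresses roughly twenty dollars of market capitalization at a 20X valuation.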

While these CFOs sit on sleeping (unallocated) IPv4 inventory, there is tremendous demand for those address blocks. So much so that the federal government may step in. In February 2011, the last legacy blocks of IPv4 addresses were split among and distributed by the Regional Internet Registries (RIRs). Many addresses were still available after this date; this milestone simply meant that supply was "officially" limited. Over the past 12 months, each of the five international registries responsible for allocating IP addresses to businesses has reported that its stock is almost entirely depleted.

RIPE NCC, the European RIR, used a court order to seize IPv4 addresses from a bankrupt enterprise a few months ago. The US Congress considered language to direct the Department of Defense to sell off its unallocated IPv4 blocks. Think of what this means: the US government is watching international precedent for using litigation, legislation, and regulation to take IPv4 addresses from businesses. Simultaneously, it is working to sell off its own. I don't have a crystal ball, but this certainly looks like we are moving toward forcible seizure of sleeping IPv4 inventory from big tech — which is already missing out on hundreds of billions of dollars in market capitalization for shareholders by sitting on these inventories.

So what can we do? "We" must demand that big tech CFOs realize the profit of their IPv4 assets by selling them or leasing them on a secure exchange. I fear that if they don't take this simple step, not only will stockholders miss out on years of profit and the correlating earnings multiples, but those enterprises will ultimately be forced to give up their IPv4 inventory at a loss in the next 3-5 years. This isn't theory: we're actively watching a perfect storm of a global IPv4 shortage, new precedent to forcibly reintroduce IPv4 blocks into the open market, and federal government awareness of the issue.

Are you a financial decision-maker at a big tech organization? Do the right thing for your shareholders and internet users. We are relying on you to reintroduce your IPv4 addresses into the marketplace to create more sustainability as the internet evolves. Plus, imagine how happy your shareholders will be when you help them realize your portion of the more than $250 billion available.

Need some direction? Websites like IPXO will help you lease your IPv4 addresses to realize recurring revenue. If you'd rather sell and realize profits once, services like HILCO, APNIC, Prefixx, and others can help. It's not labor-intensive or time-consuming. You won't have to hire anyone or spend resources training staff. In fact, it's probably the easiest way your enterprise will make tens of millions of dollars this year. I don't know that it's ever been easier and more profitable to do the right thing!

Written by Vincentas Grinius, CEO and Co-Founder at IPXO | 12-Jan-2021 18:37

The Domain Name System: A Cryptographer's Perspective

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols of the internet, the DNS emerged in an era in which today's global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer's perspective, as I described in my talk at last year's International Cryptographic Module Conference (ICMC20), there's so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with the usual occurrence that happens millions of times a second around the world: a client asks a DNS resolver a query like "What is's Internet Protocol (IP) address?" The resolver in this case answers with's current IP address.

If the resolver doesn't already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS's 13 root servers, such as the A and J root servers operated by Verisign: "Where is the name server for the .com top-level domain (TLD)?"
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, "Where is the name server for, the second-level domain (SLD)?"
  • The TLD server then refers the resolver to the server.
  • Finally, the resolver asks the SLD server, "What is's IP address?" and receives's IP address in response.

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same "delegation" model from root to TLD to SLD as I've described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, examining digital signatures at each level of the DNS hierarchy (or technically, at each "zone" in the delegation process). These digital signatures are generated using public key cryptography, a well-understood process based on key pairs, one public and one private.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the "Digital Signature Validated" indicator to the client that initiated the query.

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a "trust chain" from a "trust anchor" to the DNS record of interest, as shown in the figure above. The chain includes "chain links" at each level of the DNS hierarchy. Here's how the "chain links" work:

The root KSK public key is the "trust anchor." This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records — the ultimate response to the query — is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
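The hash-matching links described above can be illustrated with a minimal sketch: each parent zone publishes a hash of its child's KSK (analogous to a DS record), and a verifier checks every link starting from the trust anchor. The key material, zone names, and helper names below are invented for illustration and omit the signature checks themselves:

```python
import hashlib

def key_digest(public_key: bytes) -> str:
    # Stand-in for the hash of a child zone's KSK that the parent publishes.
    return hashlib.sha256(public_key).hexdigest()

# Made-up key material for three zones (real keys live in DNSKEY records).
root_ksk, tld_ksk, sld_ksk = b"root-ksk", b"tld-ksk", b"sld-ksk"

# Each zone publishes the hash of its child's KSK; the last zone has no child.
chain = [
    {"zone": ".",            "ksk": root_ksk, "child_ksk_hash": key_digest(tld_ksk)},
    {"zone": "com.",         "ksk": tld_ksk,  "child_ksk_hash": key_digest(sld_ksk)},
    {"zone": "example.com.", "ksk": sld_ksk,  "child_ksk_hash": None},
]

def verify_chain(chain, trust_anchor: bytes) -> bool:
    """Check that each zone's KSK matches the hash its parent published."""
    if chain[0]["ksk"] != trust_anchor:   # the root KSK must equal the trust anchor
        return False
    for parent, child in zip(chain, chain[1:]):
        if key_digest(child["ksk"]) != parent["child_ksk_hash"]:
            return False
    return True

print(verify_chain(chain, trust_anchor=root_ksk))   # True
print(verify_chain(chain, trust_anchor=b"wrong"))   # False
```

If any zone's published key fails to match the hash in its parent, the chain breaks and validation fails, which is exactly why the single root trust anchor can authenticate records arbitrarily deep in the hierarchy.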

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I've omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned "trust anchor," was one of the most challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative, the so-called "root KSK rollover," was challenging because there was no easy way to determine whether resolvers had actually been updated to use the latest root KSK; remember that cryptography and security were added onto the DNS rather than built in. There are many resolvers that needed to be updated, each independently managed.

The research paper "Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover" details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I've focused here on how a resolver validates correctness when the response to a query has a "positive" answer — i.e., when the DNS record exists. Checking correctness when the answer doesn't exist gets even more interesting from a cryptographer's perspective. I'll cover this topic in my next post.

Written by Dr. Burt Kaliski Jr., Senior VP and Chief Technology Officer at Verisign | 11-Jan-2021 18:29

Threat Intel Expansion on Cosmic Lynx BEC Campaign's Recorded IoCs

Why go after individuals when you can get greater rewards by zooming in on more lucrative targets like large multinational corporations (MNCs)?

That's the premise behind the Cosmic Lynx business email compromise (BEC) campaign that brought several MNCs, many of which were Fortune 500 or Global 2000 companies, to their knees.

This short study takes a look at the indicators of compromise (IoCs) linked to Cosmic Lynx that Agari publicized. It also adds several IoCs that MNCs and practically any organization the world over should look out for at the very least.

What We Know about Cosmic Lynx

Here are some facts about Cosmic Lynx from the Agari research paper:

Cosmic Lynx is the name of the Russian cybercriminal organization behind 200 BEC campaigns targeting large MNCs globally, specifically in 46 countries across six continents, since July 2019.

The cybercriminals mimicked senior-level executives of Fortune 500 or Global 2000 companies to get to employees with access to the targets' finances. About ¾ of Cosmic Lynx's targets had titles like vice president, general manager, or managing director.

The campaign used a twofold impersonation scheme. The attackers first pretend to be the CEO of an organization that is preparing to expand its operations to Asia, asking the target employee to engage with external legal counsel for the acquisition payments. The Cosmic Lynx actors then hijack the identity of a lawyer at a legitimate U.K.-based law firm to facilitate the transaction. They use Hong Kong-based mules to receive the stolen funds but have also worked with others in Hungary, Portugal, and Romania.

On average, a BEC victim pays out US$55,000. Cosmic Lynx, however, asks each target for hundreds of thousands or even millions of dollars.

Cosmic Lynx mimics secure corporate networks to trick their targets. The artifacts linked to their campaigns include 65 domains and 61 IP addresses.

Additional Intel Every MNC Needs to Know

Apart from the artifacts Agari publicized, MNCs that wish to ensure the utmost protection from Cosmic Lynx should also be wary of the additional domains and IP addresses in Table 1, obtained from WhoisXML API threat intelligence sources, specifically DNS Lookup API and Reverse IP/DNS API. Note that these IoCs were confirmed malicious by VirusTotal. They may, however, not be directly related to the Cosmic Lynx campaign but merely use the same infrastructure.

Table 1: Nonpublicized Cosmic Lynx IoCs

Domains Obtained from Reverse IP/DNS API and Dubbed Malicious on VirusTotal:

  • frzamserngsirerive[.]com
  • naffltsirerive[.]com

IP Addresses Obtained from DNS Lookup API and Dubbed Malicious on VirusTotal:

  • 104[.]24[.]102[.]118
  • 104[.]24[.]103[.]118
  • 198[.]54[.]117[.]197
  • 198[.]54[.]117[.]198
  • 198[.]54[.]117[.]199
  • 198[.]54[.]117[.]200
  • 204[.]11[.]56[.]48

Of the 61 IP addresses collated and published by Agari, 37 were categorized as "malicious" on VirusTotal. The IP address 45[.]90[.]58[.]30 proved most dangerous as it hosted two other malicious domains (i.e., frzamserngsirerive[.]com and naffltsirerive[.]com) based on Reverse IP/DNS API results.

Out of the 65 domains, meanwhile, 64 were dubbed "malicious" on VirusTotal. Five of these (i.e., mail-transport-protection[.]cc [2 IP addresses], secure-email-provider[.]com [4 IP addresses], secure-mail-net[.]com [1 IP address], secure-mail-provider[.]com [4 IP addresses], and secure-ssl-sec[.]com [4 IP addresses]) proved especially dangerous as they were each connected to one to four malicious IP addresses.

All in all, we obtained an additional two domains and seven IP addresses that were not included in Agari's list.

BEC attacks have been soaring to ever greater heights in terms of prominence. In 2019, the Internet Crime Complaint Center (IC3) received thousands of complaints from many companies across 20+ U.S. states. As such, the fact that more sophisticated threat groups like Cosmic Lynx are adding BEC campaigns to their arsenals should concern everyone. Protecting against BEC scams and other cyber attacks requires not just keeping track of publicized IoCs but also scrutinizing said indicators using domain and IP intelligence tools to comb through all possible threat vectors. | 09-Jan-2021 05:32

SpaceX Starlink Beta Update

SpaceX began public beta testing of the Starlink Internet service late last October. At that time, testing was restricted to locations in the US, near 53 degrees north latitude, where coverage was concentrated. Since then, they have made many software updates based on the beta experience and have expanded the uninterrupted-coverage area by launching new satellites. By the end of the year, they had begun beta service in southern Canada and sent beta test invitations to a few UK users. A beta test had even been spotted in the Czech Republic.

SpaceX is actively seeking permission to operate in other nations. The legal ins-and-outs are confusing, but it has foreign affiliates in at least 5 European countries, and one of those, Starlink Holdings Netherlands B.V., has subsidiaries in 4-6 other European nations, including Germany and Greece, and in Argentina. SpaceX also has foreign affiliates in Australia, Canada, Chile, Colombia, Japan, Mexico, New Zealand, and South Africa and is in discussion with the Philippine government.

SpaceX will have to establish relationships with every nation they plan to operate in, and these affiliate companies are an important asset. They also have contacts through their launch business, for example, in Argentina, where they launched the SAOCOM satellites. This is obviously a fast-changing situation, and you can watch the list grow and find more information by following this FAQ wiki and the Starlink discussion on Reddit.

The cost of the beta service in relatively affluent nations seems to be roughly the same. In the US, beta testers are paying $99 per month for the service and $499 for a terminal, including a tripod and WiFi router. In Canada, it's CAD $649 for the terminal and CAD $129 per month, and in the UK, £439 for the terminal and £89 per month. There are no data caps for now, but that might change if demand outstrips the evolving capacity. Elon Musk claims that users can easily install the terminal themselves (just point it at the sky and plug it in) and, while that may be true for users with a clear view of the sky, others will face the added expense of creating custom mounts to avoid trees and other obstacles.

What about the prices in less affluent nations like the Philippines, South Africa, Chile, Colombia, and Argentina?

The percentage of the rural population that can afford the current beta prices is lower in those nations than in North America and Europe, and the gap is even wider in many other nations. SpaceX will either charge less in poor nations in order to fully utilize capacity or focus on organizations like schools and clinics rather than individual consumers.

In addition to licenses, SpaceX will need ground stations with fiber connectivity to the regions they serve. The map shown here shows ground stations in North America and Australia, but it is somewhat out of date. They are also working on ground stations in France and New Zealand and, as with licenses, Reddit is a good place to follow current developments. SpaceX has a clear lead over other would-be low-Earth orbit Internet service providers. They have 874 working satellites in orbit, beta testers in four nations, affiliates in others, and superior launch technology, but this is just the start of the game. Satellite broadband is a dynamic, multi-dimensional market; technology is changing rapidly, and SpaceX has formidable competition. The situation reminds me of "IBM and the seven dwarfs" in the 1960s.

Written by Larry Press, Professor of Information Systems at California State University | 09-Jan-2021 02:48

.com Is A Clear and Present Danger to Online Safety

Shareholders benefit from the registry operator providing sanctuary to online criminals and child sex abusers; Congress instructed NTIA to fix the problem — here's how.

"The Internet is the real world now."

This assessment was offered by Protocol, a technology industry news site, following the very real violence on Capitol Hill during the counting of the electoral college votes that officially determines the next president of the United States. The media outlet went on to say that, "[t]he only difference is, you can do more things and reach more people online — with truth and with lies — than you can in the real world."

Despite a seminal role as the Internet's originator and a global leader in technology adoption, Americans have often struggled with addressing the negative ramifications of technology. One example is the debate about violence in video games, which has been cited as possibly contributing to tragic incidents of gun violence in American schools. Concerns about possible correlations between what teenagers were seeing in video games and what a small number of students then chose to act out in real life sparked a national conversation involving policy makers, parents, teachers, students, video game companies and a myriad of other stakeholders seeking solutions that might address the issue.

This robust engagement by a broad spectrum of stakeholders, particularly the video game industry itself, sits in stark contrast to the anemic, trying-but-not-really effort seen from ICANN and its registry operators and registrars to make domain name registrant identification data available to U.S. law enforcement, American consumers, intellectual property owners, and other stakeholders with legitimate access needs.

To briefly summarize, following the global adoption of the European Union's General Data Protection Regulation (GDPR), ICANN unilaterally determined that the WHOIS database — which had been operating since the modern Internet's inception and before ICANN was created — contravened the E.U.'s new law and relieved registries and registrars from contractual obligations that required the collection of WHOIS registrant data.

ICANN then launched the comically misnamed and hapless Expedited Policy Development Process, or EPDP, to convene stakeholders and develop a solution. This so-called expedited process — which has been declared a failure of the multistakeholder governance model by ICANN's Governmental Advisory Committee along with its Business and Intellectual Property Constituencies and others in minority statements accompanying proposed recommendations — has taken years to develop a proposed solution that enjoys little support from the stakeholders that developed it, isn't likely to be effective, and, in any event, will be implemented at a leisurely pace expected to be completed somewhere between years from now and never.

Considering that the availability of registrant identification data to anyone with access to the Internet has been a stated Internet policy imperative of the U.S. government since before ICANN existed and was referred to simply as NewCo, it is fair to consider that there is more — much more — to this process failure than meets the eye.

The reality is that registry operators and registrars have never been fans of collecting, storing, and making registrant identifiers available. However, before ICANN unceremoniously disposed of WHOIS, every registry operator provided what is known as Thick WHOIS data — which, as the adjective suggests, includes registrant identifiers along with basic Thin WHOIS data about the domain name itself — with one glaring exception: Verisign.

Thick WHOIS was approved for implementation by ICANN's Board in February 2014. Nearly three years passed before a Proposed Policy Implementation plan was issued for .com, .net, and .jobs — all Verisign-operated — to transition to Thick WHOIS, setting deadlines of May 2018 and February 2019 for compliance. Five years would seem a generous allotment of time for complying with a data-collection rule that every single other registry and registrar was already complying with.

However, in October 2017, May 2018, October 2018, and March 2019, ICANN's Board granted six-month extensions requested by Verisign. Finally, in November 2019, ICANN's Board acquiesced to Verisign's fifth extension request by granting an indefinite deferral until a set of conditions is satisfied pertaining to implementation of the EPDP-developed replacement for WHOIS — which, as previously noted, is now known to be somewhere between years from now and never.

It is unclear what persuaded ICANN's Board that these delays affecting a majority of the Internet's domain names were in the public interest or anything other than a terribly awful idea. However, given recent evidence of ICANN's susceptibility to loosening consumer pricing safeguards after receiving a $20 million contribution earmarked for "security, stability, and resiliency," one is forgiven for being curious about the street value of such pliancy.

What is beyond certain is that compliance costs weren't prohibitive for any of the much smaller and less profitable registries and registrars who all complied dutifully while their much bigger and much wealthier fellow registry skated by with endless delays. A sentient observer is forgiven for concluding that there is a double standard where, on one hand, the Internet's largest domain name monopolist enjoys a close working relationship and cozy alignment with ICANN that produces tangible beneficial outcomes while, on the other hand, are the hoi polloi, the great unwashed, and les miserables — otherwise known as everybody else.

Regardless, security, stability, and resiliency, or SSR, is an unfortunate, limited, and network-centric view of the mission for Internet policy that is dangerously outmoded. A more modish view, perhaps, would put humans at the center of Internet policy development, which could result in a more expanded and expansive view, not of authority or mandate, but of obligation and duty, as more stakeholders begin viewing safety as a necessary addition to the SSR trifecta.

That being said, the consequences of network-centric thinking are clear and terrible things are being perpetrated in the deep shadows cast by the void of registrant identifier data. The harm to American persons and property is undeniable and multiple U.S. federal agencies have weighed in with increasing alarm.

  • In 2006, then-Chairman of the Federal Trade Commission, Jon Leibowitz, traveled to ICANN's meeting in Morocco and warned that, "(t)he FTC is concerned that any attempt to limit Whois to this narrow purpose will put its ability to protect consumers and their privacy in peril."
  • More recently, in 2020, the FTC wrote to Congress and said, "(t)he FTC uses this (WHOIS) information to help identify wrongdoers and their location, halt their conduct, and preserve money to return to defrauded victims."
  • The Department of Homeland Security has also weighed in, saying in a 2020 letter that, "(s)ince the implementation of GDPR, HSI has recognized the lack of availability to complete WHOIS data as a significant issue that will continue to grow." In the same letter, DHS also cited the lack of WHOIS information as hindering its response times to criminal activity.
  • Perhaps most damning, however, is the State Department's official statement of U.S. policy regarding GDPR which declared, "...WHOIS no longer functions properly. As a result, criminal investigations necessary to protect the public — including the most vulnerable, such as children who are subject to online sexual abuse — have been impeded."

Let that sink in for a moment: the official position of the United States government is that the deliberate dysfunction of WHOIS directly correlates to the sexual victimization of children. Then consider the words written in a letter by a consortium of groups combatting online sexual abuse of children which said:

"Verisign is uniquely unforthcoming. We have regularly worked and had conversations with just about every Internet company you can think of and quite a few you are unlikely to know. Only Verisign has been so utterly uncommunicative. This is a very poor show and runs completely contrary to the spirit of multi-stakeholderism."

The letter continues in strong and unequivocal language:

"To put the matter plainly, it is immoral for a business to attempt to deflect responsibility by arguing these matters are the sole provenance of law enforcement and courts. As the dominant registry in the global system, Verisign should be taking a leadership position, adopting voluntary procedures to combat online child sexual abuse."

Considering that a 2017 report of the Internet Watch Foundation found that 79% of all child sexual abuse webpages reside in .com and .net, one might consider appealing to those who are actually benefiting from the registration fees that are collected by Verisign for the domain registrations used for such heinous activity. A quick search online reveals that Verisign is, by and large, owned by a veritable cornucopia of the richest and most powerful institutional investment firms in the world.

There are too many to list here but, as of September 2020, the top four, each with an equity position that exceeds $1 billion, are Berkshire Hathaway, Vanguard Group, BlackRock, and Renaissance Technologies. Far from enlightened, however, Verisign shareholders are, in fact, malefactors of great wealth who are profiting from registration fees that are paid to Verisign by intellectual property thieves, child sex abusers, and other criminals that operate in the Internet's largest registries. These bad actors remain unmolested because the registry operator not only didn't implement essential Thick WHOIS data requirements that protect Americans but also stood by and did nothing while ICANN incinerated WHOIS entirely.

It is important to keep in mind that this is a company operating risk-free legacy registries entrusted to it by the U.S. government with the explicit understanding that it could enjoy ridiculously massive profits in exchange for nothing more than protecting the public interest. Considering the literally gross profit margins being generated, the question for shareholders is simple: if they are benefiting from this, then they should know about it; if they aren't, then it shouldn't be happening.

The time for discussion and debate is over. There is too much bad faith, too many agendas, and too much water under the bridge. Fortunately, since there is no requirement that ICANN must oversee the collection of registrant identifiers — and it has more than proven itself incompetent and incapable of doing so — the solution is likely very simple.

Availability of registrant identifiers has always been a priority of the U.S. government and it should solve the problem in much the same way that the E.U. precipitated it: by setting a policy that must be complied with by every registry and registrar that maintains a domain name registration that is, or may be, accessed by an American citizen. Failure to comply should result in the levying of hefty fines and, if necessary, seizures of domain names and other assets just as the U.S. Treasury Department does for money laundering, terrorist financing, narcotics distribution, and other crimes. Why should online sexual abuse of children, illegal opioid sales, intellectual property theft, and other crimes that harm Americans be combatted any less vigorously?

In the past, domain name registrant data could be found online at the InterNIC website, and after ICANN's formation, ICANN was granted a license to the InterNIC trademark and website. But the trademark is still owned by the U.S. Commerce Department, which should send a strong and unmistakable message of no-confidence to ICANN and its contracted parties by cancelling the license and reclaiming its property as the new forever home for registrant identifier data that is "available to anyone with access to the Internet."

Written by Greg Thomas, Founder of The Viking Group LLC | 08-Jan-2021 18:09

Technology Trends for 2021

The following are the most important current trends that will be affecting the telecom industry in 2021.

Fiber Construction Will Continue Fast and Furious in 2021. Carriers of all shapes and sizes are still building fiber. There is a bidding war going on to get the best construction crews and fiber labor rates are rising in some markets.

The Supply Chain Still Has Issues. The huge demand for building new fiber had already put stress on the supply chain at the beginning of 2020, and the pandemic increased the delays as big buyers reacted by re-sourcing some of the supply chain outside of China. By the end of 2020, there was a historically long waiting time to buy fiber for new and smaller buyers, as the biggest fiber builders had pre-ordered huge quantities of fiber cable. Going into 2021, the delays for electronics have lessened, but there will be issues with buying fiber for much of 2021. By the end of the year, this ought to return to normal. Any new fiber builder needs to plan ahead and order fiber early.

Next-Generation PON Prices Dropping. The prices for 10-gigabit PON technologies continue to drop and are now perhaps 15% more expensive than GPON technology, which supports speeds up to a symmetrical gigabit. Anybody building a new network needs to consider the next-generation technology, or at least choose equipment that will fit into a future overlay of the faster technology.

Biggest ISPs are Developing Proprietary Technology. In a trend that should worry smaller ISPs, most of the biggest ISPs are developing proprietary technology. The cable companies have always done this through CableLabs, but now companies like Comcast are striking out with their own versions of gear. Verizon is probably leading the pack and has developed proprietary technology for fiber-to-the-curb technology using millimeter wave spectrum as well as proprietary 5G equipment. The large ISPs collectively are pursuing open-source routers, switches, and FTTP electronics that each company will then control with proprietary versions of software. The danger in this trend for smaller ISPs is that a lot of routinely available technology may become hard to find or very expensive when the big ISPs are no longer participating in the market.

Fixed Wireless Gear Improving. The electronics used for rural fixed wireless is improving rapidly as vendors react to the multiple new bands of spectrum approved by the FCC over the last year. The best gear now seamlessly integrates multiple bands of spectrum, and also meets the requirements to notify other carriers when shared spectrum bands are being used.

Big Telcos Walking Away from Copper. AT&T formally announced in October 2020 that it would no longer add new DSL customers. This is likely the first step for the company to phase out copper service altogether. The company has been claiming for years that it loses money on maintaining old technology. Verizon has been even more aggressive and has been phasing out copper service at the local telephone exchange level for the last few years throughout the northeast. DSL budgets will be slashed, and DSL techs let go, and as bad as DSL is today, it's going to go downhill fast from here.

Ban on Chinese Electronics. The US ban on Chinese electronics is now in full force. Not only are US carriers forbidden from buying new Chinese electronics, but Congress has approved funding to rip out and replace several billion dollars of currently deployed Chinese electronics. This ostensibly is being done for network security because of fears that Chinese equipment includes a backdoor that can be hacked, but this is also tied up in a variety of trade disputes between the US and China. I'm amazed that we can find $2 billion to replace electronics that likely pose no threat but can't find money to properly fund broadband.

5G Still Not Here. In 2021 there is still no actual 5G technology being deployed. Instead, what is being marketed today as 5G is really 4G delivered over new bands of spectrum. We are still 3-5 years away from seeing any significant deployment of the new features that define 5G. This won't stop the cellular carriers from crowing about the 5G revolution for another year. But maybe we've turned the corner, and there will be fewer than the current twenty 5G ads during a single football game.

Written by Doug Dawson, President at CCG Consulting | 07-Jan-2021 14:35

2020 Domain Name Year in Review

2020 — a year like no other.

The impact of COVID on the domain name industry was felt far and wide as ICANN meetings were held virtually, travel was cancelled, TLD launches were delayed, the topic of domain name abuse was front and center, and we all tried to navigate a "new" normal. Unlike many sectors, the domain name industry was fortunate and, in many ways, survived 2020 unscathed. Much of our industry was able to continue working from home after an initial period of adjustment. And although this last year was like no other, there were still a number of notable events. So with that, here are the top 10 domain news stories from 2020:

10. GoDaddy Launches GoDaddy Corporate Domains

GoDaddy announced the launch of GoDaddy Corporate Domains, a domain management solution for large companies. The launch built on GoDaddy's acquisition of Brandsight earlier in the year. Providing services to some of the world's most well-known brands, GoDaddy Corporate Domains is focused on enabling companies to contain costs, optimize portfolios and mitigate risks by providing unprecedented access to domain name and website data.

9. 50,000th UDRP Filed at WIPO

In November, WIPO announced that it "had registered its 50,000th 'cybersquatting' case, a major milestone capping two decades of pro-consumer activity ensuring Internet users can easily find genuine sites for the brands they love and trust."

8. DotBrand TLD Activity Continues

Although a number of dotBrand registries terminated their contracts in 2020, interest in a second round remains high. Moreover, industry statistics show a consistent 15% growth in dotBrand domain registrations for 2020.

7. GoDaddy Announces VIP Program

GoDaddy announced the launch of the GoDaddy Verified Intellectual Property (VIP) program. The VIP program provides pre-vetted, well-known, and famous brands an escalation path to address IP abuse. It covers fraudulent domain registrations and infringing websites hosted with GoDaddy, among other forms of abuse. The program is currently by invitation-only.

6. ICANN Meetings Go Virtual

According to an ICANN announcement, "ICANN67, which was originally slated to be held in Cancún, Mexico, from 7-12 March 2020, was held as ICANN's first entirely remote virtual meeting following the declaration of COVID-19 as a public health emergency of international concern by the World Health Organization." Subsequently, ICANN68 and ICANN69 were also held virtually. Some saw the move to virtual meetings as a way to increase participation in the multi-stakeholder model as the meetings have become more accessible. The use of web conferencing, along with the chat functionality it provides, has allowed the voices of those who often don't speak to have their perspectives heard.

5. Neustar Registry is Acquired by GoDaddy

In April, GoDaddy announced the acquisition of Neustar's Registry business. The Neustar Registry business includes an extensive portfolio of top-level domains, including .biz, .co, .in, .nyc and .us, and supports more than 215 TLDs and approximately 12 million domains. This includes its Managed Registry Services business that provides end-to-end registry management for over 130 brand TLDs and 70 generic TLDs. As part of the transaction, GoDaddy will strictly adhere to a governance model that maintains independence between the GoDaddy registry and registrar businesses.

4. M&A Activity Continues

2020 saw a number of notable transactions. In January, OpSec Security completed its acquisition of the MarkMonitor brand protection assets from Clarivate Analytics. In February, GoDaddy announced its intent to acquire Uniregistry's registrar, marketplace and portfolio, and in April, the transaction was completed. In July, Clarivate announced its acquisition of CPA Global, and the transaction was completed in October. In November, Donuts announced its acquisition of Afilias, and the transaction was completed at the end of December. Also in November, it was announced that private equity firm Clearlake Capital Group had acquired Endurance International Group Holdings, which includes Bluehost and HostGator, among other brands.

3. EPDP Working Group Publishes Final Report

In August, the Expedited Policy Development Process for gTLD Registration Data Working Group published their final report. In summary, the report's final recommendations included:

  • The creation of a central gateway which would receive requests for non-public data and route them to the appropriate registrar
  • A requirement for the central gateway to provide a response to every request
  • An accreditation authority that would accredit third-parties for use of the new system
  • Service Level Agreements by which contracted parties must abide
  • Quarterly reporting to the community

The final report was approved by the GNSO council in September and is now awaiting final approval from the ICANN Board, pending a cost-benefit analysis of the proposal.

2. NY Attorney General, Registrars and ICE Respond to COVID Domain Registrations

In March, New York Attorney General Letitia James sent letters to leading registrars asking them "to stop bad actors from taking advantage of the current crisis, as well as commit to removing the scam domains." Shortly thereafter, a number of domain registrars announced measures to combat fraud by blocking and suspending registrations. And in December, an "ICE investigation led to seizure of 2 fraudulent websites purporting to be biotechnology companies developing treatments for COVID-19 vaccine."

1. ICANN Board Withholds Consent for a Change of Control of the Public Interest Registry (PIR)

In November 2019, the Internet Society and PIR announced an agreement with Ethos Capital, a private equity firm, to acquire PIR and all of its assets, including the .ORG registry. Many thought that the deal was a slam dunk pending ICANN's approval. But in April, the ICANN Board made the decision to reject the proposed change of control and entity conversion request that Public Interest Registry (PIR) submitted to ICANN. According to a blog posted by Maarten Botterman, ICANN Board Chair, "After completing extensive due diligence, the ICANN Board finds that withholding consent of the transfer of PIR from the Internet Society (ISOC) to Ethos Capital is reasonable, and the right thing to do."

So what can we expect in 2021?

Hopefully, by the end of 2021, there will be a return to normalcy, including a return to face-to-face meetings. Along with that, I am hopeful that we will gain greater clarity on the timing of the next round of gTLDs and the launch of the last remaining new gTLDs. Like most though, I am really just looking forward to putting 2020 in the rearview mirror.

  1. The information contained in this blog is provided for general informational purposes about domains. It is not specific advice tailored to your situation and should not be treated as such.

Written by Elisa Cooper, Head of Marketing, GoDaddy Corporate Domains | 05-Jan-2021 20:19

An Open Letter to Big Tech CFOs: Save the Internet Before You're Forced

Dear Chief Financial Officers of tech giants,

The internet is in crisis, and you can lead your organization to help solve the problem. You'll be well compensated, and you'll enjoy massive public relations benefits. I fear that if you don't, global governments will force your hand. There is a shortage of available IPv4 addresses but we are years away (possibly a decade or more) from IPv6 viability and adoption in North America. It's estimated that the top tech firms are sitting on over 150,000,000 dormant (unallocated) IPv4 addresses today. These unallocated IPv4 addresses are desperately needed to sustain the size, availability, and evolution of the internet's network and internet-capable devices.
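To put those numbers in perspective, a quick back-of-the-envelope calculation (a sketch using Python's standard ipaddress module; the 150 million figure is the estimate cited above, not a measured value) shows what share of the total IPv4 space those dormant addresses represent:

```python
import ipaddress

# Total IPv4 space: 2^32 addresses (including reserved ranges).
total_ipv4 = 2 ** 32
assert total_ipv4 == 4_294_967_296

# The ~150 million dormant addresses cited above, as a share of the whole space.
dormant = 150_000_000
share = dormant / total_ipv4
print(f"{share:.1%} of all IPv4 addresses")  # roughly 3.5%

# For a sense of scale: that is almost nine /8 blocks.
slash8 = ipaddress.ip_network("10.0.0.0/8").num_addresses  # 16,777,216
print(dormant / slash8)  # roughly 8.9 blocks
```

Roughly 3.5% of the entire address space sitting idle is significant when the free pools at the regional registries are exhausted.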

I am concerned that if you, as big tech CFOs, don't willingly lease your dormant IPv4 addresses to other organizations, your hand will be forced, and you won't be fairly compensated. I am appealing to CFOs because leasing your unused IPv4 addresses turns an expense into a profit center of recurring monthly revenue. There is tremendous opportunity here to do the right thing and be well compensated for it.

However, failure to act now and provide your unallocated IPv4 addresses to the open market will likely result in little or no compensation for them. Recently the European internet registry RIPE NCC used a court order to seize IPv4 addresses from a bankrupt business. The goal is to reallocate them and help reduce the IPv4 shortage. This was previously unprecedented, but other governments will likely follow suit and become increasingly more aggressive with IPv4 seizures.

Furthermore, even the United States Department of Defense recognizes the current crisis and impending actions. In a recent bill proposal, the DOD was directed to sell off blocks of unallocated IPv4 addresses at fair market prices. The bill was ultimately not turned into law as the language was not included in the Senate version, but you must take note of what is happening. There is a finite number of IPv4 addresses (under 4.3 billion) and if they are not willingly introduced back into the open market by big tech CFOs, that action will be forced.

You have limited time to take advantage of this trifecta of opportunity: create a recurring revenue stream, avoid legislation and regulation, and enjoy the amazing public relations benefits of doing the right thing. Let's face it, people love your services but there is negative sentiment toward your organizations. We are relying on you to help save the internet. This is your chance to break through the congressional testimonies, antitrust lawsuits, censorship accusations, and all the other negative press big tech is facing. Instead, you can lead your field and do the right thing by reintroducing your IPv4 addresses into the open market.

So, we appeal to you to reintroduce your dormant IPv4 addresses to the open market. Lease them to organizations who need them. Enjoy the additional revenue and the positive PR. We are trusting you to do the right thing quickly.

Written by Vincentas Grinius, CEO and Co-Founder at IPXO | 05-Jan-2021 20:04

What Are the Connected Assets of Confirmed Fake FBI Domains?

Two months ago, the Federal Bureau of Investigation (FBI) alerted the public to a list of domains that could easily be mistaken to be part of its network. The list of artifacts contained a total of 92 domain names, 78 of which led to potentially malicious websites, while the remaining 14 had yet to be activated or were no longer active as of 23 November 2020.

How Does the Ruse Work?

It is common for threat actors to spoof the domains of legitimate and well-respected organizations to gain the public's trust in phishing emails and scams. Typical end goals include disseminating false information; gathering valid usernames, passwords, and email addresses; collecting personally identifiable information (PII); and spreading malware, leading to further compromises and potential financial losses.

Threat actors often mimic the domains of institutions like the FBI by slightly changing their legitimate counterparts' characteristics. Spoofed domain names may contain misspellings or use alternative top-level domains (TLDs), such as .com instead of .gov.
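As a rough illustration of the TLD-swap and brand-embedding patterns described above, here is a minimal, hypothetical detection heuristic in Python. The function name and the fbi.gov default are illustrative assumptions; real typosquat detection would also need edit-distance and homoglyph checks:

```python
# A minimal lookalike-domain heuristic (illustrative only): flag any hostname
# that mentions the brand string but is not the legitimate domain itself or
# one of its subdomains. This sketch covers only the TLD-swap and
# brand-embedding patterns described above.
def is_suspicious(hostname: str, legit_domain: str = "fbi.gov") -> bool:
    host = hostname.lower().rstrip(".")
    brand = legit_domain.split(".")[0]          # "fbi"
    if host == legit_domain or host.endswith("." + legit_domain):
        return False                            # genuine domain or subdomain
    return brand in host                        # brand mentioned elsewhere: suspicious

print(is_suspicious("www.fbi.gov"))      # False: real subdomain
print(is_suspicious("fbi.camera"))       # True: alternative TLD
print(is_suspicious("fbimaryland.org"))  # True: brand embedded in another name
```

A simple substring check like this would still miss pure misspellings such as "fib.gov", which is why the verification practices listed below matter.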

Who Is at Risk?

U.S. citizens could unknowingly access the websites the spoofed domains point to while seeking information related to the FBI and its ongoing activities. Worse, threat actors could use email accounts seemingly belonging to the institution to trick people into downloading a piece of malware, putting their systems and data at risk.

Given the potential dangers, the FBI urges citizens to carefully evaluate the domains they access and scrutinize the messages they receive to make sure these are really part of the FBI network. Best practices include:

  • Verifying how web addresses, website names and content, and email addresses are spelled
  • Ensuring operating systems (OSs) and applications are always patched
  • Updating anti-malware and antivirus software regularly
  • Performing regular network scans
  • Disabling macros on documents downloaded from unfamiliar sources
  • Refraining from opening emails or downloading attachments from unknown senders
  • Never providing personal information via email
  • Using strong two-factor authentication, if possible
  • Enabling domain whitelisting apart from blacklisting
  • Ridding systems of unnecessary applications
  • Verifying that every website one visits has a Secure Sockets Layer (SSL) certificate

What Domains Should the American Public Be Wary Of?

The complete list of harmful and suspicious domain names identified by the FBI can be seen in Table 1 below.

Table 1: Confirmed Fake FBI Domains

agenciafbi[.]ga, fbiigovv[.]com, infofbi-unit[.]com, authefbi[.]ga, fbi-intel[.]com, johnsonfbi[.]com, cyber-crime-fbi[.]org, fbikids[.]com, legalienfbi[.]com, fbi[.]camera, fbimaryland[.]org, plapper-fbi[.]com, fbi[.]cash, fbimaxwell[.]com, powerfulfbi[.]ninja, fbi[.]ca, fbimostwanted[.]info, us-fbigov[.]com, fbi[.]health, fbi-news[.]com, virtualfbi[.]com, fbi[.]studio, fbinews[.]ga, xalienfbi[.]com, fbi[.]systems, fbinews[.]online, x-alienfbi[.]com, fbi[.]xn--mgbayh7gpa, fbinigeria[.]org, fbi-fraud[.]com, fbi0[.]com, fbi-ny[.]com, fbidefense[.]com, fbibau[.]us, fbioffice[.]ml, fbienglish[.]com, fbi2[.]com, fbi-official[.]com, fbifrauddepartment[.]org, fbi-unit[.]net, fbiofficial[.]online, fbifraud[.]primebnkonline[.]com, fbi3262[.]live, fbione[.]com, fbiglobalgp[.]com, fbi7[.]cn, fbiopenthedoor[.]icu, fbigov[.]art, fbi9[.]com, fbiorganisation[.]online, fbi-gov[.]network, fbi9[.]me, fbiorganization[.]club, fbigrantinvestigation[.]com, fbiagent[.]online, fbipedophilerings[.]com, fbiinspectionunit[.]com, fbi-augustyn[.]pl, fbiphoto[.]com, fbi-police[.]com, fbiaustralia[.]com, fbireserveco[.]biz, fbi-c-d[.]com[.]co, fbibau[.]de, fbireport[.]us, fbicyberdivision[.]com, fbi-bau[.]de, fbiusagov[.]online, hdqkfbi[.]cn, fbi-biz[.]com, fbiurl[.]com, ic-fbi[.]org, fbiboston[.]xn--mgbayh7gpa, fbiusagov[.]com, fbiwarning[.]club, fbi-c[.]com[.]co, fbiusgov[.]com, fbi-cd[.]com[.]co, fbihelp[.]org, fbi-belote[.]com, fbilibrary[.]ml, fbigiftshop[.]shop, fbispassport[.]gq, fbi-pay[.]com, fbiboston[.]com[.]jo, fbi99[.]cn, fbi2000[.]com, fbiusa[.]net, fbi[.]com[.]jo, fbipublicidad[.]com, fbi-usa[.]us, fbi058[.]com

Domain malware checks via VirusTotal revealed that 66 of these 92 domain names (72%) were dubbed "malicious."

Connected Domains and IP Addresses to Steer Clear Of

Apart from the published artifacts, it is also possible to identify multiple connected domains and IP addresses as enumerated in Table 2, 17 of which also proved malicious. Some of the additional 5,140 domains may be malicious or at least suspicious.

Table 2: Malicious Connected IP Addresses and Domains According to VirusTotal as of 2 January 2021

Malicious FBI-Identified Domain | Connected IP Address (DNS Lookup API) | Connected Domains (Reverse IP/DNS API)
cyber-crime-fbi[.]org | 192[.]64[.]119[.]70 | 40
fbi[.]camera | 34[.]102[.]136[.]180 | 300+
fbi[.]ca | 199[.]59[.]242[.]153 | 300+
fbi[.]studio | 34[.]102[.]136[.]180 | 300+
fbi-unit[.]net | 208[.]91[.]197[.]91 | 300+
fbi9[.]me | 217[.]70[.]184[.]38 | 300+
fbi-c[.]com[.]co | 34[.]102[.]136[.]180 | 300+
fbimaryland[.]org | 217[.]70[.]184[.]38 | 300+
fbimaxwell[.]com | 91[.]195[.]240[.]94 | 300+
fbimostwanted[.]info | 34[.]102[.]136[.]180 | 300+
fbi-news[.]com | 198[.]54[.]117[.]197 | 300+
fbi-ny[.]com | 208[.]91[.]197[.]91 | 300+
fbiorganisation[.]online | 34[.]102[.]136[.]180 | 300+
fbireport[.]us | 23[.]94[.]191[.]90 | 300+
legalienfbi[.]com | 34[.]102[.]136[.]180 | 300+
x-alienfbi[.]com | 34[.]102[.]136[.]180 | 300+
fbi-c-d[.]com[.]co | 34[.]102[.]136[.]180 | 300+
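Note that the indicators in the tables are "defanged" (dots replaced with "[.]") so they cannot be followed accidentally. Before feeding them into a blocklist or a lookup tool, they have to be "refanged"; a minimal sketch (the helper name is illustrative):

```python
# Convert defanged indicators back to their usable form for blocklist
# ingestion: "[.]" becomes "." and "hxxp" becomes "http".
def refang(indicator: str) -> str:
    return indicator.replace("[.]", ".").replace("hxxp", "http")

iocs = ["cyber-crime-fbi[.]org", "192[.]64[.]119[.]70", "fbi-c[.]com[.]co"]
blocklist = {refang(i) for i in iocs}
print(sorted(blocklist))
# ['192.64.119.70', 'cyber-crime-fbi.org', 'fbi-c.com.co']
```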

Public indicator-of-compromise (IoC) releases are indeed helpful to IT security teams whose main goal is to keep their organizations' infrastructure and confidential data protected at all costs. At times, however, they are not complete. As the short study featured in this post shows, users who want top-notch security may need to do extra research to include all possible threat vectors in their blacklists, including the use of domain, IP, and other threat intelligence tools.

Written by Jonathan Zhang, Founder and CEO of WhoisXMLAPI | 05-Jan-2021 19:08

The Machine Learning Operations Tooling Landscape Expands to 300

Happy New Year! There is no scarcity of Machine Learning Operations products being introduced to the industry. Since June of 2020, over 84 new ML toolsets, including but not limited to all-in-one, data-pipeline, and model-training applications, were born. In this list of almost 300 MLOps tools, there are 180 startups. Of these 180 startups, more than 60 raised capital in 2020, and about two-thirds are focused on data pipelines and model training. The data-pipeline and modeling-and-training categories have led the way, in part because 80% of the time needed to construct models goes to assembling complete data sets: wrangling the data, understanding where it resides, and imputing, compiling, organizing, and labeling it.

In this post, my objective is to provide an overview of the ML landscape before another burst of investments takes place in 2021. More significantly, we believe this post will help you narrow your focus to the applications that resonate most with you, so you can discover the organizations you are most interested in working with or investing in. Among these 280+ enterprises, we'll highlight four groups worth targeting:

First is the newcomer category, the all-in-one platform. Almost $2.6B has been invested in this category. This type of implementation covers most, if not all, phases of the ML process, from data preparation to model registration and evaluation: data ingestion, data preparation, data exploration, feature engineering, model design and training, model testing, deployment, comparison measurement, and maintenance. This kind of application does not provide the ability to shop for models in an AI marketplace, but there are out-of-the-box algorithms you can test and apply for training and fitting.

Already prevalent are AI marketplaces, where you can shop for or buy models that you think would be a good fit for your data. These AI marketplaces are quickly becoming centralized hubs for offering models to ML designers. Such out-of-the-box models save enormous time, with boosting and bagging features already pre-packaged.

The second area of focus for these new applications is the data pipeline. Data wrangling is where all the heavy lifting takes place; data management, querying, labeling, arrangement, ingestion, augmentation, warehousing, versioning, and analytics most often reside here. About 80% of the data preparation stage happens in these sub-categories before any training or modeling can be presented. A complete dataset is what most ML engineers desire, and some of these companies make it effortless to train on new data sets regardless of complexity. This category leads the way for investors: almost $4B has been invested since 2015.

Modelling and Training

The sub-categories here include interoperability, frameworks, experiment tracking, distributed training, and benchmarking. Investments in modeling and training were substantial in 2020, and the emphasis remains on interoperability. Discovering the best fit for your model is less about time and more about experimentation. Once you identify the "problem statement" from your customers and are clear about what the firm expects to learn or extract from the data, experimenting to find an algorithm that matches the data is what makes your model precise. During experimentation, over-fitting and under-fitting will occur. You will most likely have to choose between a model with low bias and high variance or one with low variance but higher bias. When faced with that decision, you will likely turn to the well-known strategy of creating an ensemble, or blended approach, to make the model fit the data well. While the number of modeling and training organizations in this data set is high, about 90 companies, only $300M has been invested in aggregate. Keep in mind that only 15-20% of your time building models is spent in this category.
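The variance-reduction effect of ensembling mentioned above can be illustrated with a toy simulation (a sketch using only the Python standard library; the noise levels and ensemble size are arbitrary assumptions): averaging the predictions of several independent high-variance "models" shrinks the spread of the final estimate by roughly the square root of the ensemble size.

```python
import random
import statistics

# Toy illustration of why ensembling tames high-variance models: each "model"
# predicts the true value 10.0 plus independent Gaussian noise, and averaging
# ten such predictions shrinks the spread of the combined estimate.
random.seed(42)
TRUE_VALUE = 10.0

def noisy_model_prediction() -> float:
    return TRUE_VALUE + random.gauss(0, 2.0)   # single high-variance model

single = [noisy_model_prediction() for _ in range(1000)]
ensemble = [
    statistics.fmean(noisy_model_prediction() for _ in range(10))  # 10-model average
    for _ in range(1000)
]

print(round(statistics.stdev(single), 2))    # about 2.0
print(round(statistics.stdev(ensemble), 2))  # about 2.0 / sqrt(10), i.e. ~0.63
```

Bagging applies exactly this idea to models trained on bootstrapped samples of the same data set.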

Hardware will remain at the core of all investments. We will need more computing capacity to query our data with more sophisticated ML methodologies, as deep learning and neural networks are being used more often with larger datasets. Suppose you don't care excessively about insights and only care about your data's fidelity and your model's accuracy. In that case, you might want to introduce more variables and complexity into the data to attain even greater precision, but this demands more powerful computing. Two companies, both focused on accelerators, have raised the most money in the hardware space, and many other companies are focused on bringing AI to edge devices (building chips optimized for inference on consumer devices with low power). At a recent Google AI conference, bringing AI to the edge had a dedicated track. While we're still at the very beginning of 2021, I believe investments in hardware startups will continue. Across these 280+ companies, almost $3B has been invested in hardware.

Geography and Academic Interest in MLOps

From a geographic vantage point, most of the investments come from the Bay Area, though some startups are now being established in other US hubs; Boston is a distant second, and the Bay Area remains the epicenter of MLOps. Having said this, the AI research scene appeared to slow in 2020: Google disrupted hiring for AI researchers (perhaps due to the pandemic), and Uber dismissed its entire AI team. However, the ML production scene is still evolving, and academia in AI/ML is proliferating. Amazon and other cloud providers are encouraging scholars to collaborate due to the immense shortage of data science professionals. The Amazon Science Scholars program is one such site dedicated to recruiting scholars; Amazon is profoundly invested in R&D, with hundreds of researchers and applied scientists committed to innovation across the company. The Amazon Scholars program has broadened academics' opportunities, not only at Amazon but at nearly every MLOps firm. On the Slack channels of MLOps enterprises, there is often a channel dedicated to #academics. By applying research models in practice to solve challenging technical problems, MLOps companies are in a unique position to measure the impact of their research ideas. Our internal team recruited an ML professor, Mr. Gordon Jemwa, who also moderates our internet discussions at UCB. He is now part of our ML Measurement Survey, helping to build out the weightings for questions and answers so that we have an accurate readiness score and analysis to understand an organization's maturity before it embarks on an ML implementation.

Written by Fred Tabsharani, AI/ML | Digital | Growth | 05-Jan-2021 01:55

WordPress Now Powers 39.5% of the Web

Last month in the annual "State of the Word" presentation for 2020, Automattic CEO and WordPress co-founder Matt Mullenweg announced that WordPress now powered 39% of websites, as measured by W3Techs. The number has actually grown a bit more since that time to 39.5%.1 Perhaps by next month it will pass 40%.

What is more remarkable to me is to see that in December 2020, for the first time, the number of sites using WordPress passed the number of sites that were NOT using any form of content management system (CMS). I've been watching these measurements over many years as I've evaluated what CMS to use for various sites. But there was always this "None" category that meant that W3Techs could not identify a CMS being used. These could be sites using just simple web servers, or custom-designed systems, or possibly CMSs that have been customized so far that they are no longer distinguishable. (Or security people might have obfuscated the use of the CMS.)

To put it another way — there are now more sites using WordPress than custom servers.

I see this as an excellent milestone — congratulations to everyone involved in the WordPress ecosystem. The WordPress community has a mission to "democratize publishing" — to make it easy for everyone to be able to publish their information on the web using open source technologies. Just as open source technologies (e.g., Apache and NGINX) are the dominant web servers used on the Internet (see the stats), the community wants to see WordPress become an open source "publishing layer" on the Internet.

As someone involved with 25+ web sites that all run on WordPress, I welcome this continued growth of the WordPress ecosystem. It means more options and less potential lock-in to closed, proprietary web systems. It will be interesting to see how far WordPress continues to grow!

  1. To be fair, this is not all the websites on the global Internet, but it's the top 10 million sites as ranked by the Alexa and Tranco lists (read more about their methodology). I see it as a good representative sample — and the W3Techs team have been tracking this info for many years now and so there is a long history for trend data. 

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society | 04-Jan-2021 18:38

The Christmas Goat and IPv6 (Year 11)

This is the fourth year now with almost no snow during the Christmas Goat event here in Sweden, and so once again, you get a photo without any snow. Because of Covid-19 and 99.99% of people working from home, I have not even seen the Goat live this year… What a crazy year it has been!

This year's measurements started low as usual, at 25% IPv6, but improved over the event and landed at a total of 45%, a small increase over 2019.

According to Google, Sweden has doubled the IPv6 traffic since 2019!! :) But we didn't double the Goat traffic… :(

Google also tells us that IPv6 is almost always over 30% worldwide now. The spikes are on weekends, as it is more common now to use IPv6 at home than at the office. The Covid-19 situation also shows lower spikes, a result of more people working from home since March 2020.

The ISP with the most hits from Sweden is the local ISP Gavlenet. They have more unique hits than the largest Swedish ISPs Telia, Tele2, Tre and Telenor combined. So, who is looking at the Goat if the visitors are not coming from Sweden? This year I have used GoAccess with geo-IP to produce more detailed statistics.

North America and Europe are dominating. In North America, it's almost only the US, and in Europe, it's UK, Finland and Germany generating the most visits.

Why is Sweden so bad when it comes to IPv6 while we often praise our fiber and Internet access as the best? The first thing we can blame is our largest ISP, Telia, which has not yet enabled IPv6 in its mobile network and only in some small parts of its fiber networks. The second thing we can blame is that it is "politically correct" to build networks with a "communication operator" (CO). The CO manages the active equipment from the subscriber port to one or many L3 devices, and many ISPs can connect there. If an ISP wants to activate IPv6, the CO first must enable IPv6 in its network, and if an ISP is connected to, say, 50 open networks, that is not done in a week…

I am writing this on the 30th of December, 2020, and the Goat has not been burned down. It's a world record for the Goat, which has never before survived this long four years in a row!

Written by Torbjörn Eklöv, CTO, Senior Network Architect, DNSSEC/IPv6 | 30-Dec-2020 20:48

QAnon and 8Chan Digital Footprint Analysis and Investigation Expansion

In October, Brian Krebs reported that several websites related to 8Chan and QAnon went offline, albeit only briefly. That happened when the entity protecting them from distributed denial-of-service (DDoS) attacks, CNServers LLC, terminated its service to hundreds of Spartan Host IP addresses, including those associated with VanwaTech or OrcaTech, the Internet service provider (ISP) of most 8Chan and QAnon sites. As a result, the said companies' websites went offline, but only briefly, as Spartan Host obtained DDoS protection from Russia-based ddos-guard[.]net.

From the report, we obtained several IP addresses and domains related to 8Chan and QAnon, specifically:

  • 22 IPv4 addresses
  • 6 IP netblocks
  • 131 domain names
  • 9 subdomains

We used several domain and IP intelligence tools, such as Bulk WHOIS Lookup, Bulk IP Geolocation Lookup, and Reverse IP Lookup, to analyze the affected organizations' digital footprints. We presented our findings in a way that answers these questions:

  • Where are the IP addresses located?
  • Are the IP addresses still with VanwaTech?
  • How old are the domains?
  • Are the domains' WHOIS records publicly available?

Analysis of the Companies' IP and Domain Footprints

Are the IP Addresses Still with VanwaTech?

As of the time of writing, two months have passed since the release of the list of associated IP addresses. It would be interesting to see if VanwaTech still maintains the IP addresses associated with 8Chan and QAnon. Bulk IP Geolocation helped us determine that of the 22 IP addresses, only five remained with VanwaTech as of 16 December 2020.

ISP | Number of IP Addresses
N.T. Technology, Inc. | 12
VanwaTech | 5
FranTech Solutions | 2
OVH SAS | 2
CHINANET Guangdong Province Network | 1

The IP addresses still under VanwaTech's control are:

  • 203[.]28[.]246[.]100
  • 203[.]28[.]246[.]1
  • 203[.]28[.]246[.]123
  • 203[.]28[.]246[.]124
  • 203[.]28[.]246[.]138

Where Are the IP Addresses Located?

The five VanwaTech IP addresses are located in the U.S., along with 14 others that are related to 8Chan and QAnon. The other IP addresses can be traced back to China (1 IP address) and Canada (2 IP addresses). The locations are consistent with the fact that QAnon was originally an American movement and 8Chan's owner is an American.

What Are the Domains' Registrant Countries?

Like the geolocation of the IP addresses, most of the domains were registered in the U.S. But unlike the IP geolocation results, which only pointed to three countries, 12 registrant countries were named by Bulk WHOIS Lookup, as shown in the chart below.

How Old Are the Domains?

8Chan was established in October 2013 and was rebranded to 8kun in October 2019. QAnon, on the other hand, emerged in October 2017. Given both entities' ages, it is surprising that about one-fourth (27%) of the domains on the list are more than 20 years old, created before 2000.

Around 14% of the domains were created within 2020 and so were barely a year old, while 37% were created within the last five years.

Are the Domains' WHOIS Records Publicly Available?

Lastly, we looked at the domains' WHOIS records and compared the number with redacted records against those whose details were publicly available. As expected, most of the domains — 87%, to be exact — were privacy-protected.

Obtaining More Digital Footprints

Using the remaining five IP addresses that point to VanwaTech as their ISP, we were able to uncover other possible inclusions to 8Chan's and QAnon's domain footprints. Reverse IP Lookup revealed all the domains that share the given IP addresses.

IP Address | Number of Connected Domains and Subdomains
203[.]28[.]246[.]100 | 24
203[.]28[.]246[.]1 | +300
203[.]28[.]246[.]123 | 179
203[.]28[.]246[.]124 | 2
203[.]28[.]246[.]138 | 31

While 8chan, or 8kun, is tied to controversial discussions about free speech, it has also been linked to mass shootings. QAnon, on the other hand, has mostly figured in disinformation campaigns and disproven conspiracy theories. Given the questionable clouds surrounding the two organizations, monitoring domains and IP addresses related to them is necessary. | 30-Dec-2020 20:04

DNSSEC Now Deployed in all Generic Top-Level Domains, Says ICANN

The Internet Corporation for Assigned Names and Numbers organization (ICANN org) announced that all of the current 1,195 generic top-level domains (gTLDs) have deployed Domain Name System Security Extensions (DNSSEC).

Why it's important: "DNSSEC allows registrants to digitally sign information they put into the Domain Name System (DNS). This protects consumers by ensuring that DNS data that has been corrupted, either accidentally or maliciously, doesn't reach them."

More work ahead: ICANN will be putting its focus on country code top-level domains that have not yet DNSSEC-signed their zones. | 29-Dec-2020 23:05

Donuts Completes the Acquisition of Afilias

Donuts stated today that it has completed the acquisition of Afilias that was announced on November 19. Donuts' CEO Akram Atallah says the company is now ready to begin the integration plan promising minimal disruptions to customers. "We expect no changes in the short term, and ample notice on any changes that are decided. Security, stability and reliability continue to be our top priorities," he added. Ram Mohan, Afilias' Chief Operating Officer: "Together we look forward to delivering promising new technologies and best practices to our registry clients, registrars, employees and the entire domain community." | 29-Dec-2020 22:49

My Telecom Predictions for 2021

It's that time of the year for me to get out the crystal ball and peer into 2021.

The FCC Will Have Egg on its Face from the RDOF Grants. The reverse auction was a disaster in many ways, with a lot of the money going to companies that can't possibly do what they promised or companies that largely intend to make a profit by pocketing a lot of the grants. The FCC will have a chance to rectify some of the problems during the review of the long forms — but my bet is that they won't disqualify many bidders. If the FCC doesn't reject bad awards, it's going to be in the headlines for years when rural America figures out that they've been cheated out of good broadband. At a minimum, this will bring a close examination of whether reverse auctions are a good way to help rural broadband — because this auction was a disaster.

The FCC Will take a Path to ... It's impossible to guess what the FCC will do until we know the results of the Georgia Senate races — and predicting that is beyond my pay grade. If the Democrats prevail in both races, then I predict that the new FCC will start the process of trying to bring back broadband regulation and net neutrality. But even then, I don't expect much progress on the effort for most of 2021 — the regulatory process is slow, and there will inevitably be lawsuits challenging any decisions. If the Republicans win one or both Senate seats, then we're likely to see regulatory deadlock at the FCC for much, or even all of 2021. If there is a deadlock, then very little will get done, and even routine matters might get bogged down in partisan politics.

The Pandemic Will Continue to Slow Down the Industry. Even with a vaccine finally hitting the market, the first six months of 2021 will continue under pandemic restrictions. Towards the end of next year, things will start feeling normal again, although we may never return to the old normal. Expect a lot more Zoom visits in place of in-person meetings. It's still going to be a rough year for trying to hold live conventions.

Technician Shortage Becomes Noticeable. The baby boomer technicians are retiring in droves, which is already causing a shortage of the most experienced technicians for the next few years. We'll eventually fill the shortage with new technicians, but telecom companies are going to struggle to hire and retain technicians until we're able to close the gap.

Verizon will Join AT&T in Abandoning DSL. The big telcos are finally acknowledging that copper networks are dying and will stop pretending to patch up dying copper. This means that cities where the big telcos aren't building fiber will become true cable monopolies.

There will be Big Increases in Broadband Rates. Most cable companies have already announced higher broadband rates for 2021. But the biggest increase in rates will come quietly. Big ISPs will start vigorously enforcing data caps. Big ISPs will stop offering as many 'special' prices for new customers, and a larger percentage of customers will pay the list price for broadband.

5G Hype Will Continue. We will still not see any major 5G features introduced in 2021, so 5G will continue to be 4G delivered on new spectrum. But the cellular company hype will convince enough of the public that there is something special about 5G that they'll continue to buy 5G phones.

Web Video Meetings Will Improve. Software companies will improve the software for video platforms and will make it even easier and safer to conduct video meetings. Video software is also going to start being embedded in a lot of the software we use every day. This means that video meeting traffic volumes will grow even after the end of the pandemic.

Robocallers Won't be Deterred by the FCC Fixes. The companies that make a living with robocalls will find ways around the Shaken/Stir process, and the industry is probably a few more improvements away from fixing the problem.

Hacking by Foreign Governments is Going to Shake the Security Industry. For the last few years, the security industry was mostly ahead of hackers, but security is a back-and-forth battle, and bad actors like foreign governments are going to take the upper hand for a while. Large corporations, government entities, and telecom companies will be running scared for much of the year. And like always, they'll regain the upper hand again, and the cycle will repeat.

Cellular Networks Will Continue to Degrade. The use of cellular data is currently doubling every two years, which is greatly stressing cellular network quality. The cellular carriers need to implement massive numbers of small cells, add new spectrum, and fully implement 5G to keep up with the growing demand. Since those solutions take years to implement, cellular network quality will continue to degrade in many places during 2021.

The FCC Maps Aren't Going to Get Better. We've been talking about this issue for years, but we're not going to see better maps in 2021.

Written by Doug Dawson, President at CCG Consulting | 29-Dec-2020 21:20

Telesat Update – Proposal for a Larger Constellation, Canadian and DARPA Contracts, IPO and More

Blue satellites are in polar orbits and red satellites are in inclined orbits.

Telesat has a number of unique advantages and, if LEO broadband truly is a half a trillion-dollar addressable market, there will be room for multiple providers.

I've discussed Telesat's LEO broadband project in earlier posts, but the project has progressed, so an update is needed.

The original plan was to launch 117 satellites but that has changed. The phase 1 constellation will now have 298 satellites and the second phase will add 1,373 for a total of 1,671. The revised plan has been submitted to the FCC, and they expect it to be approved next year.

While Telesat applied to increase the number of satellites, the macro architecture remains the same as originally planned. There will be two sub-constellations, one with 351 satellites in polar orbits (98.98 degrees/1,015 km) and another with 1,320 in inclined orbits (50.88 degrees/1,325 km). This patented architecture will enable them to serve the entire globe. (I am not a lawyer, but I wonder whether that is something that can be patented).

The sub-constellation architecture will enable global coverage and low latency but will require sophisticated inter-satellite laser links (ISLLs). It turns out that DARPA is also developing Blackjack, a military LEO communication constellation, and since the military requires low-latency and the ability to quickly establish connectivity at arbitrary, perhaps remote locations, they require ISLLs. Telesat received a $2.8 million study contract for the design of the Blackjack bus in 2018 and was awarded $18.3 million to develop and test Blackjack last October. In that role, Telesat selected Mynaric to supply ISLLs and may use them in their satellites as well.

The Canadian government has granted Telesat C$85 million to support research and development and another C$600 million to subsidize Internet connectivity in rural Canada. The R&D funds will go to early satellite tests and will support approximately 500 professional jobs, and the rural connectivity funds are like those in the US, where SpaceX was awarded $885 million.

While Telesat will have global coverage, they will focus on Canada and the north at first, and that will put them in competition with OneWeb, which plans to do the same. OneWeb will have a head start since it already has a distribution partner and plans to begin service in the north next fall, but Telesat will need fewer ground stations because of its ISLLs, and it already has 10 GEO teleports in North America and two others in Hawaii and Austria.

Telesat has run tests and done demonstrations of many potential applications since launching a test satellite in 2018, and Lynette Simmons, Director of Marketing and Communication, says the system design is complete and they expect to announce the prime contractor very soon. They will finance the constellation through restructuring and a public stock offering. President and CEO Dan Goldberg is confident that they will be able to raise sufficient capital based on their track record. The company is over 50 years old and is a large, global GEO satellite operator that has been broadcasting television since 1978 and providing Internet connectivity since 1996, and it has been doing advanced research for both the US and Canadian governments. Goldberg thinks LEO broadband is a half-trillion-dollar market, and you can see his pitch in the following video. (A BNN Bloomberg interview with Telesat CEO here)

Let me add a little speculation. Nearly two years ago, Telesat signed an agreement to use the software-defined network (SDN) platform Google had developed for Project Loon, which provides connectivity using balloons in the stratosphere. If Telesat's system design includes Google's SDN, Telesat LEO satellites may be able to interoperate with Google's balloons. Going a step further, they may one day interoperate with Telesat's GEO satellites, creating an integrated three-layer network routing packets between as well as within layers depending upon the service level required by a customer or application. An integrated network could also provide a fallback in the case of equipment failure.

A reader recently commented on my Twitter feed that Telesat was "moot" because SpaceX has superior launch capability and a head start, and OneWeb, which, like Telesat, is forsaking the consumer market for commercial applications like 5G backhaul, is a direct competitor. He was wrong. Telesat has a number of unique advantages. If LEO broadband turns out to be anywhere near the half-trillion-dollar addressable market Goldberg expects, there will be room for multiple providers.

Updates Dec 28, 2020:

A reader pointed out that Telesat has also committed to investing the revenue from its sale of C-band spectrum in the LEO constellation. That spectrum will be used for 5G mobile connectivity and will enlarge the prospective mobile-backhaul market.
Speaking at a webinar on "Building NewSpace," Michel Forest, Telesat Director of LEO Systems Engineering, says there is significant demand for LEO among their current GEO customers who want low latency and more capacity in specific places like airline hubs and ports. (33:37)

Written by Larry Press, Professor of Information Systems at California State University | 29-Dec-2020 17:48

Video and Broadband Demand

One of the obvious drivers of broadband usage is online video, and a study earlier this year by the Leichtman Research Group provides insight into the continuing role of video growth in broadband usage. The company conducted a nationwide poll in the US looking at how people watch video, and the results show that Americans have embraced online for-pay video services.

The survey concentrated on what it calls SVOD service (subscription video-on-demand). In the industry, this category includes pay services like Netflix and Disney +, but does not include the online services that mimic the networks covered by traditional cable TV like Sling TV. This is the eighteenth annual nationwide survey by LRG that looks at video usage.

One of the most interesting results of the survey is how many households still buy some form of pay-TV service: 74% of homes pay for a full TV service. About two-thirds of all households still buy traditional cable TV from cable companies or satellite providers. This means that around 8% of homes now buy a full video service from an online provider that mimics traditional TV channels, such as Sling TV, Hulu + Live TV, YouTube TV, or fuboTV. That's no longer a surprising statistic when we saw in September that Hulu + Live TV is now the fifth-largest provider of video, ahead of Verizon.

Even considering that many homes are buying full video packages online, the statistics show a continuing decline in traditional TV viewing. Where 74% of homes buy some form of pay-TV service today, that's way down from 85% of homes in 2015 and the peak of the pay-TV market at 88% in 2010. This means 14% of all homes have stopped buying pay-TV in the last decade.

Distribution of Pay-TV and subscription video-on-demand (SVOD) – Source: Pay-TV in the U.S. 2020, Leichtman Research Group

The survey summarized cable households in another interesting way:

  • 60% of homes pay for a full TV service and also buy at least one SVOD service like Netflix.
  • 14% of homes buy a full pay-TV service but do not subscribe to an additional SVOD service like Amazon.
  • 20% of homes buy an SVOD service like Netflix but don't pay for a full TV line-up.
  • Only 6% of homes don't pay for any TV service.

The survey also summarizes the use of SVOD service like Netflix in a way I hadn't seen before:

  • The survey showed that 79% of households using traditional cable TV (from a cable company or satellite TV service) also purchase an online video service.
  • 76% of households that don't subscribe to traditional cable TV pay for an online video service.
  • However, 96% of customers who subscribe to an online video service that mimics traditional cable TV also buy at least one additional for-pay TV service.

The survey also shows that age is still a factor for paying for a full for-pay TV service. 81% of adults over 55 have a for-pay TV service, 76% of those between 35 and 54 have a pay-TV service, and only 63% of those between 18 and 34 have pay-TV service. This is the trend that is making TV less valuable for advertisers that want to reach younger audiences.

A few other interesting factoids coming out of the survey:

  • 38% of consumers who have moved in the last year do not buy a full for-pay TV service. This verifies something we've seen in many surveys where respondents say they are thinking of cutting the cord, but then don't do it. Perhaps when life presents an easy option to cut the cord, such as when moving, consumers finally decide not to resubscribe to pay-TV. That would imply there is still a large potential pool of cord-cutters in the market.
  • 13% of all TV households use a TV antenna rather than subscribe to a pay-TV service for local channels.

Buried somewhere in these statistics are the millions of rural homes that don't have the option to stream video.

Written by Doug Dawson, President at CCG Consulting | 24-Dec-2020 18:10

Attack Surface Discovery: A Review of FINRA-lookalike Domain and Linked IoCs

NPOs and NGOs are no strangers to cyber attacks targeting their members. A few examples of recent phishing campaign subjects include:

Mercy Corps and the International Federation of Red Cross and Red Crescent Societies in 2020: Along with various aid groups, these organizations suffered from rising cyber attack volumes capitalizing on the COVID-19 pandemic.

Political organizations and NGOs in South and East Asia from 2014 to 2020: Perpetrated by targeted attack group Bronze President, these attacks used a combination of specially crafted and publicly available tools to monitor target organizations' activities in order to discredit them or steal their intellectual property.

United Nations Children's Fund (UNICEF) in October 2019: Used fake domains such as session-services[.]com and service-ssl-check[.]com.

More recently, phishers used a Financial Industry Regulatory Authority (FINRA) look-alike domain in an attempt to breach several of its members' networks. FINRA is tasked with overseeing 624,000 brokers in the U.S., so attacking its clientele could yield a hefty sum should phishing email recipients fall for the ruse.

How FINRA Members Can Avoid Getting Phished

Publicly available information on the phishing scam identified the domain invest-finra[.]org as an indicator of compromise (IoC). Using a bevy of WHOIS, Domain Name System (DNS), and IP intelligence tools, we listed telltale signs of typosquatting domain use (even if its WHOIS record has been redacted) that FINRA members could take note of to avoid getting phished.

WHOIS Lookup: Used to spot differences that could point to malicious activity by comparing the WHOIS records of the official FINRA domain (finra[.]org) with that of the phishing domain (see Table 1).

Table 1: Differences between Official and Look-Alike FINRA Domains

WHOIS Record Detail | Legitimate FINRA Domain (finra[.]org) | Phishing Domain (invest-finra[.]org) | Sign of Potential Malicious Activity?
Domain age | ~13.5 years | 36 days (at the time of writing) | More than 70% of newly registered domains (NRDs) are malicious, suspicious, or not fit for work.
Registrar | , LLC | Gandi SAS | Organizations typically use the same registrar for all their domains.
Registrant contact information | Publicly available; the country is the U.S. | Redacted; the country is France | FINRA only supports brokers in the U.S. and is affiliated with the said country's government. So why would it use France as its registrant country or a French WHOIS redaction service?

Reverse WHOIS Search: Used to find domains that contain "finra." Some of these may not be publicly attributable to the organization. If that is the case, further scrutiny may be required, since undisclosed domains not under FINRA's control could figure in other attacks.

A lookup for all domain names containing the string "finra" yielded a list of 439 domains. Of these, only 365 are possibly owned and maintained by the organization, as they shared the legitimate FINRA domain's registrant organization name and country. Around 16%, or 71 domain names, do not share those data points or could not be publicly attributed to FINRA. Among the non-publicly attributable domain names, finra-apple[.]com proved malicious.

DNS Lookup API: Used to determine IP addresses related to the fake FINRA domain. Our search revealed the IP address 217[.]70[.]184[.]38, which proved malicious when subjected to a search on VirusTotal.

Reverse IP/DNS Lookup: Used to identify domain names that resolved to the same IP address as invest-finra[.]org. We uncovered several domain names, some of which were dubbed "suspicious" by VirusTotal (e.g., 0011100[.]xyz and 001952[.]xyz) and others "malicious" (e.g., 020408[.]xyz and 0a0074066c49886a39b5a3072582f5d6[.]net).
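Taken together, the lookup steps above amount to a red-flag checklist that is easy to script. The sketch below illustrates the idea in plain Python; the record layout, field names, 90-day "newly registered" threshold, and the creation date are all illustrative assumptions, not output from any actual WHOIS API.

```python
from datetime import datetime, timezone

# Profile of the legitimate domain, per its public WHOIS record.
LEGIT_PROFILE = {"registrant_country": "US"}

def red_flags(record, now):
    """Return the typosquatting warning signs a (hypothetical)
    WHOIS record triggers, following the Table 1 comparison."""
    flags = []
    age_days = (now - record["created"]).days
    if age_days < 90:  # assumed threshold for "newly registered"
        flags.append(f"newly registered ({age_days} days old)")
    if record.get("registrant_country") != LEGIT_PROFILE["registrant_country"]:
        flags.append("registrant country differs from legitimate domain")
    if record.get("registrant_org") is None:
        flags.append("registrant details redacted")
    return flags

# Illustrative record for the look-alike domain (dates assumed).
lookalike = {
    "domain": "invest-finra[.]org",
    "created": datetime(2020, 11, 10, tzinfo=timezone.utc),
    "registrant_org": None,  # redacted
    "registrant_country": "FR",
}

checked_on = datetime(2020, 12, 16, tzinfo=timezone.utc)
for flag in red_flags(lookalike, checked_on):
    print(f"{lookalike['domain']}: {flag}")
```

Any domain that trips several of these checks at once, as invest-finra[.]org does, is a strong candidate for blacklisting and further scrutiny.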

The Attack Surface Discovery Lowdown

By utilizing various WHOIS, DNS, and IP intelligence sources, we were able to proceed with an attack surface discovery analysis and obtain more IoCs apart from the one that has been publicly reported. These include:

  • 71 domain names, one of which has proven malicious
  • An IP address that was also dubbed "malicious"
  • At least 300 domains that resolved to the same IP address as invest-finra[.]org, some of which were cited as "suspicious" and others "malicious"

Companies that liaise with FINRA could better protect their systems and networks from phishing and more sinister attacks by adding IoCs like the following to their blacklists:

  • finra-apple[.]com
  • 217[.]70[.]184[.]38
  • 0011100[.]xyz
  • 001952[.]xyz
  • 020408[.]xyz
  • 0a0074066c49886a39b5a3072582f5d6[.]net

As this short study showed, consulting as many threat intelligence sources as possible helps organizations maintain a more secure network by identifying as much of their potential attack surface as possible. | 23-Dec-2020 16:41

Overcoming Obstacles to Full-Scale Business Intelligence Adoption in 2021

Data analytics isn't just for large organizations anymore. As businesses and community collectives increasingly move their operations into digital spaces, the vast amounts of data being collected pose an opportunity for them to get to know their stakeholders better.

While the security implications of this migration can't be taken lightly, the potential for game-changing insights is likewise enormous. Indeed, today's consumers are demanding highly personalized experiences, and this isn't a trend that's going to disappear anytime soon, according to McKinsey and Company.

The right business intelligence platform can help you unlock the power of the data you collect without exposing you to governance pitfalls. However, many organizations face hurdles when adopting a full-scale BI solution. Here are five obstacles that you will face and how to overcome them.

Data Accessibility Limitations

Cross-platform accessibility is the key to delivering value. These days, many self-service BI tools can be accessed from the device of your choice. However, you'll need to work with everyone in your organization's leadership, as well as on the vendor's side, to make sure the solution integrates well with your goals, and without exposing any new vulnerabilities.

What's more, easy access might lead to incorrect ad-hoc conclusions being drawn, so make sure you establish analytics standards that serve as guidelines for your reporting.

Mike Ferguson has witnessed the analytics industry grow from a nascent one to the exciting one it is today. Despite this rise, companies still have issues accessing the insights they crave. "In a world where data complexity is on the rise, companies have to put in place the foundations for a data-driven enterprise that enables the business to quickly and easily find the data they need, know that they can trust it, and be able to access it quickly to deliver value," notes Ferguson in a think piece about business-ready data.

Poor Data Quality

Your analytics conclusions are only as good as the information that goes into them.

While companies are up to their necks in data, not all of it is relevant or of good quality. Two examples of data-related issues that you'll face are relevant data being deeply buried in your systems and your analytics reports delivering convoluted results that stunt progress.

The solution to these issues is to implement a strong data quality management program. Start by having your business executives define their goals and relate this to the data you collect. Evaluate your data for integrity, uniqueness, validity, accuracy, and consistency. For example, a data point that repeatedly appears in your data sets isn't unique, even if it might be valid.

You'll need to put processes in place that identify non-unique data points and clean them. Neglect this, and you'll end up creating reports that overweight that data point and lead you to draw incorrect conclusions.
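As a minimal illustration of such a cleaning step (the field names and records below are hypothetical), duplicates can be dropped while preserving the first occurrence of each key:

```python
def dedupe(records, key):
    """Keep only the first occurrence of each key value, preserving order."""
    seen = {}
    for rec in records:
        seen.setdefault(rec[key], rec)
    return list(seen.values())

rows = [
    {"customer_id": 1, "region": "EU"},
    {"customer_id": 2, "region": "US"},
    {"customer_id": 1, "region": "EU"},  # duplicate that would skew report counts
]
clean = dedupe(rows, "customer_id")
print(len(clean))  # 2
```

Real data quality programs layer validity and accuracy checks on top, but deduplication like this is typically the first pass.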

Perceived High Costs

Talk to any senior executive about implementing BI, and the first thing you'll hear is that it's probably too expensive. The fact is that many mid-sized businesses look at enterprise analytics infrastructure and assume they need the same thing: the data warehouses, the IT infrastructure professionals, an army of data analysts, and a headache around making sure all of the data pipelines are secure.

This degree of investment was probably necessary during the early days of BI adoption. These days, however, self-service BI platforms have truly democratized data analytics for organizations of all sizes. These tools will help you examine weaknesses in your organization and increase your ROI.

As a result, your investment in these platforms more than justifies itself. The right BI tool for your organization will allow you to gather data from various sources and bring them together into an easily deciphered dashboard.

Some companies increase their costs by choosing tools that don't allow them to connect disparate forms of data. As marketing expert Neil Patel says in his guide to business intelligence adoption, "Make sure you find a business intelligence tool that makes it easy for you to connect with your existing data sources. It's worth noting that not every business intelligence software on the market will integrate with specific databases. So don't make assumptions; always double-check that your data is compatible with the software in question."

An agile and cloud-based BI platform will help you make sense of your data at a price that will almost certainly result in high ROI. The key is to make sure its features suit the nature of your data.

Lack of Organizational Adoption

Most BI projects start as well-intentioned pilots that generate great results but fail to scale to the entire organization. It's a phenomenon Gartner analyst Rita Sallam has witnessed over and over. "Over the past 10 years, with the rise of big data, we've done a great job at storing and managing content, or X data. What we haven't done is a great job at using that pervasively across the organization," she notes.

Your BI program must receive buy-in from key stakeholders throughout your organization before the pilot begins. Your pilot should also measure the right KPIs. Many organizations measure vanity metrics that make them feel good but don't result in any insight. The ultimate aim of a BI program should be to democratize data across your company.

With this in mind, involve both business and technical groups in your pilot project and have them agree on common goals. This exercise will help remove any barriers between the two functions.

Assure your employees that analytics are there to enhance the quality of their work, not replace them. As BI permeates your organization, you'll find that these exercises will help drive a culture of data throughout, and you'll face little resistance.

Flying Without a Strategy

Defining a clear BI strategy is a challenging task. Most companies think of BI as driving decisions in certain business units.

Tying your BI program's objectives to the most critical results that your company wishes to achieve is a great way to align every aspect of your organization to your BI program. This approach will also help you avoid measuring the wrong KPIs.

As analytics expert Chris Penn states, having an overarching data strategy is mission-critical, and the best strategies are those that focus on boosting sales. "When we have to approach data-driven marketing, and data-driven strategy," he writes, "we've got to approach it from the perspective of a sale, not what's best for the company, not what's best for the analytics department or the IT department of the marketing department."

A Necessity in the Information Age

As the amount of data businesses collect increases, your organization must focus on a BI program that is driven by quality. Overcoming the five hurdles you've read about here will help bring your organization onto the right path and allow you to become truly data-driven.

Written by Evan Morris, Network Security Manager | 21-Dec-2020 21:23

NTIA Objects to Planned Auction

Agency asserts interest in trademark protections for Internet's largest domain name registry

According to media sources, the National Telecommunications and Information Administration (NTIA) wrote to Verisign last Friday, objecting to the company's plan to auction to the highest bidder. The planned release for — described by the Second Amendment to the .com Registry Agreement and intended as a pilot for the remaining reserved single-character .com names — involved an opaque consideration process that ignored community input and set aside hard-won trademark protections developed by stakeholders in order to maximize dollars earmarked for an unidentified cadre of non-profit organizations.

The two-page letter asserts that:

  1. NTIA retains rights pertaining to single-character .com names stemming from the fact that it was the U.S. Government which required single-character names at the second-level to be reserved from initial registration in 1993;
  2. the release of, and presumably the other remaining reserved .com single-character names, requires NTIA approval; and,
  3. such approval is likely to be withheld unless the release procedure incorporates "policies, procedures, and protections used for all domain names."

This last point vindicates the position of the Intellectual Property constituency (IPC) and other stakeholders that specifically called for the implementation of the Trademark Clearinghouse and Sunrise Period mechanisms that were created to protect brand owners and trademark registrants. The omission of these hard-won protections can't be considered an oversight because it was quite deliberate. The views of the IPC and others were offered during yet another example of the kangaroo court-style of public comment period that seems to be ICANN's new normal and where community input is solicited only to fall upon deaf ears before hitting the circular file.

In this case, ICANN's board dismissed intellectual property rights protections as inapplicable because the .com Registry Agreement is, among other things, a decaying relic from prehistoric times that predates the development of modern safeguards for intellectual property rights online. ICANN's board also cited as precedent the release of .biz (2008), .info (2010), and .org (2011) single-character reserved names without a sunrise period — it's a little surprising that the board seems to have forgotten that sunrise periods didn't exist until 2013. But it's difficult to be too surprised considering that when asked about the vote to approve the auction at ICANN's Kobe meeting, a number of board members didn't recall the issue being raised in the first place, let alone being voted on.

Given the propensity of Verisign and ICANN to seek absolution in the loopholes of an obsolete legacy agreement, stakeholders might have a brighter future if they follow their lead and stop asking for what isn't in the .com Registry Agreement and get laser-focused on what is — particularly those elements that a federal appellate ruling says "unlawfully restrain trade." In the meantime, NTIA's timely intervention reinforces the position taken by the IPC and others that the release of — along with the other remaining single-character .com names — must be subject to the same procedures, policies and protections as every other newly available domain name.

NTIA's concerns mostly pertain to the charitable contributions, which may constitute a price that exceeds what is currently allowed by the Cooperative Agreement and would therefore require explicit NTIA approval. The agency further questioned whether the compensation for auction vendors would also exceed the allowable wholesale price for .com domain names.

The gravity of the letter is greatly expanded by considering that it was signed and sent last Friday by NTIA's then-acting administrator, Adam Candeub, who has since been tapped by the White House for a new role as Deputy Associate Attorney General — a Justice Department job which oversees the Antitrust Division, among other things. It is counter-intuitive to assume that he would abandon such recently demonstrated interest in these issues after being elevated to a role that offers such greatly expanded opportunities for addressing them.

Another factor is the potentially shortened timeframe to effect solutions. January 20th is likely an important inflection point for Mr. Candeub and, thus, should be seen similarly for Verisign and ICANN as well. Ignoring NTIA's letter or trying to run out the clock would be myopic and risks drawing greater scrutiny from DOJ and others that could carry over into the next administration — which isn't far-fetched considering the president-elect was Vice President when the 2012 price cap was imposed. Any expanded inquiry would necessarily seek to know the motives of two organizations that, on paper at least, have no interest, rights, or standing for these single-character names beyond the maximum allowable price of $7.85 — in essence, it would become necessary to determine why ICANN and Verisign accepted such rapidly ballooning risk by seeking so mightily to deny the rights of others to something in which they themselves hold no rights whatsoever.

Albert Einstein once said that "problems cannot be solved with the same thinking that created them." However, given these evolving circumstances, it doesn't take a rocket scientist to know that hoping intransigence will result in an outcome other than a full competition review of .com — a long-overdue follow-up to DOJ's 2012 analysis which has already been requested by U.S. Senators Ted Cruz and Mike Lee — is putting a lot of shareholder value at risk for a gamble that, in retrospect, will be seen as arbitrary and capricious.

Besides — hope is not a strategy.

Written by Greg Thomas, Founder of The Viking Group LLC | 18-Dec-2020 22:02

A Brief OSINT Analysis of Charming Kitten IoCs

Charming Kitten is a cybercriminal group believed to be of Iranian origin. It was first seen in 2014 and has remained active in the years since its initial detection. The group uses an intricate web of methods, such as spear phishing and impersonation. Its members even create fake organizations and personas, complete with email and social media accounts.

The group's targets are mostly individuals in the media, human rights, and academic research fields. Unlike other cyberespionage groups that aim to infiltrate victims' networks, one of Charming Kitten's primary objectives is to hack into social and email accounts and gather details about victims.

ClearSky released a comprehensive report about the group, presenting 240 malicious domain names, 86 IP addresses, and 28 email addresses as indicators of compromise (IoCs). We studied these IoCs in light of recent sightings of the group.

Malicious Domains

Of the malicious domains cited as Charming Kitten IoCs, we gathered records for 45 domains with our bulk WHOIS lookup tool. Based on their registrant email addresses, Facebook has acquired two of the domains — com-video[.]net and login-account[.]net. Microsoft's Digital Crimes Unit also claimed two other domains — yahoo-verification[.]net and yahoo-verify[.]net. Healthcare Management Solutions also now owns sadashboard[.]com.

Interestingly, other malicious domains still can't be attributed to the spoofed companies, including the following:

  • fb-login[.]cf
  • drives-google[.]com
  • microsoft-upgrade[.]mobi
  • gmal[.]cf
  • hot-mail[.]ml

Charming Kitten and other groups could still use domains like these to imitate brands in phishing attacks.

IoCs Not Reported as Malicious

Some domains included in the list of IoCs were not reported as malicious, or even suspicious, as of this writing, despite their involvement in Charming Kitten attacks. The domain britishnews[.]org, for example, was found to redirect to britishnews[.]com[.]co, a made-up news website that hosted a penetration testing tool called the Browser Exploitation Framework (BeEF). The domain is not tagged "malicious" even though it resolves to a malicious IP address.

The table below shows other domains that are not tagged as malicious and their associated IP addresses revealed by DNS Lookup. The IP addresses were then run on VirusTotal to check if they are malicious.

Domain Name           | Associated IP Address (from DNS Lookup) | Tagged "Malicious" on VirusTotal?
----------------------|-----------------------------------------|----------------------------------
app-documents[.]com   | 88[.]212[.]247[.]68                     | No
britishnews[.]org     | 52[.]58[.]78[.]16                       | Yes
emiartas[.]com        | 103[.]224[.]182[.]250                   | Yes
my-healthequity[.]com | 45[.]56[.]79[.]23                       | Yes
                      | 45[.]33[.]2[.]79                        | Yes
                      | 45[.]33[.]23[.]183                      | Yes
                      | 45[.]79[.]19[.]196                      | Yes
                      | 198[.]58[.]118[.]167                    | Yes
                      | 96[.]126[.]123[.]244                    | Yes
userslogin[.]com      | 91[.]195[.]241[.]137                    | No

Since these domains and a couple of the IP addresses are not cited as malicious, they could still be used successfully in attacks. Reverse IP Lookup also showed that all of the IP addresses could be shared, since each has hundreds of connected domains. Implementing an IP-level blacklist for the malicious IP addresses may therefore be a good approach for organizations.
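A minimal sketch of such an IP-level check (the domain names, IP addresses, and resolver mapping below are entirely hypothetical, used only to illustrate the idea):

```python
# Hypothetical IP-level blacklist. In practice these entries would come
# from threat intelligence feeds like the IoCs discussed above.
BLOCKED_IPS = {"195.0.2.163"}

def is_blocked(domain: str, resolve) -> bool:
    """Return True if the domain resolves to a blacklisted IP address."""
    return resolve(domain) in BLOCKED_IPS

# Stand-in resolver; a real check would perform a live DNS lookup.
fake_dns = {
    "bad-example.test": "195.0.2.163",
    "good-example.test": "203.0.113.10",
}
print(is_blocked("bad-example.test", fake_dns.get))   # True
print(is_blocked("good-example.test", fake_dns.get))  # False
```

Blocking at the IP level catches every domain parked on a malicious host, though for shared hosting IPs it can also block unrelated, benign domains.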

IP Addresses

To recall, 86 IP addresses were tagged as Charming Kitten IoCs. The IP addresses in the table above are not among the IoCs mentioned in the ClearSky report. As such, continuous monitoring of malicious domains is needed to ensure that IP address blacklists stay up to date.

We used IP Geolocation to see the originating countries of the IoCs and found that a majority were from the U.S., followed by the Netherlands, France, the U.K., and Germany. Aside from Iran, these countries are also where some of the group's targets were located. In fact, the group was seen impersonating German journalists in July 2020.

Charming Kitten IoCs, like those of other cybercrime groups, may continue to evolve. Some domains and IP addresses would be dropped, while others may be claimed by the legitimate entities they imitate. Still, some IoCs are too effective to let go and so could still be weaponized by Charming Kitten or other groups.

The key takeaway for organizations is that constant monitoring of known IoCs is necessary for utmost protection. | 18-Dec-2020 21:53

Cast Your .vote for the Most Interesting New gTLD Development in Q4

2020 has been extremely eventful, so it follows that the domain industry has continued to experience perpetual change, progress and uncertainty in the last three months of the year. In our Q4 New gTLD Quarterly Report, MarkMonitor experts analyze topical registration activity, launch information, .brand growth and DNS abuse, and share a list of upcoming industry meetings for 2021.

Use cases in the online political and Environmental, Social and Corporate Governance (ESG) spaces

As the United States presidential election dominated the news in November, we examined political and voting-related TLDs and their registration counts after the 2016 and 2020 elections. Can you guess which TLD — .democrat, .gop, .republican, .vote, .voting or .voto — had the highest growth in that timeframe? Check out the new report for the answer.

We also review the recent introduction of the MarkMonitor Domains for Good Program, and the types of new gTLDs our inaugural group of mission-driven and non-profit organizations secured in the Environmental, Social, and Corporate Governance (ESG) space.

TLD launches never stop; they just go on to General Availability

While it may seem like nearly all of the TLDs allocated from the 2012 New gTLD Program have launched successfully by now, there is still a lot of activity expected for late 2020 and early 2021. We discuss timeframes and launch requirements for everything from sports and jobs to personal hygiene and cities in the new quarterly report.

Trend lines still moving, from DNS abuse to .brand registration growth

ICANN and the domain industry continue to take abuse in the Domain Name System seriously. In this quarterly report, we review the recent results from the ICANN Domain Abuse Activity Reporting Project, as well as Interisle Consulting Group's Phishing Landscape 2020 study.

Not all use of New gTLDs is negative; indeed, we found that in the .Brand space, there continues to be some growth in registration numbers as well as shifts in rankings across some industries and geographies.

2020 wrapped up with a bow

Following INTA's Annual Meeting, held virtually in late November, the 2020 meeting season is over. We wish our colleagues, customers and industry partners a well-earned rest through the end of the year! That said, given that the domain industry never sleeps, 2021 should be another year of interesting progress and advancement in the gTLD space. Please reach out to your Client Services Manager or contact us here for your domain management needs in this year or the next.

Read the Q4 New gTLD Quarterly Report.

Written by Chris Niemi, Manager, Domain Services at MarkMonitor | 18-Dec-2020 21:10

MarkMonitor Releases New gTLD Quarterly Report for Q4 2020

New gTLD Quarterly Report, Q4 2020
Download Report

With the United States election in the news, domain registrants cast their online .vote. The Q4 New gTLD Quarterly Report from the MarkMonitor team explores U.S. election usage, ESG domain activity and DNS abuse trends.

In this report, we are pleased to provide a collection of articles about Q4 2020 topical registration activity, launch information, DNS abuse, .brand news and notes and industry meeting updates. This quarter, with our recent introduction of the Domains for Good program, our MarkMonitor team analyzes the Environmental, Social and Corporate Governance (ESG) TLD space.

We also review a new Interisle Consulting Group report on phishing, its implications for DNS abuse, the potential effect on new gTLD registration and blocking strategies. Additionally, we detail the ongoing use of .brand Top-Level Domains to keep you up-to-date on what is happening in the space. With 2020 nearly completed, new gTLD launch activity continues to occur and will push into 2021.

Download the 2020 Q4 New gTLD Quarterly Report to learn more. | 18-Dec-2020 20:31

The Next Green Initiative is Internet Sustainability

We are all aware of the pollution caused by burning coal and combusting oil. The results are obvious: exhaust spewing from vehicles, factories, and power plants. Many of us don't realize, however, that we are actively contributing to the unnecessary burning of energy (natural gas and coal in the U.S.) to power the Internet. We wag our fingers at Internet Service Providers (ISPs) and data centers, but the fact is that our own organizations are wasting electricity every single hour out of ignorance or apathy. There is hope: the antidote to ignorance is education (keep reading!), and the antidote to apathy is your passion.

To simplify the solution, it is important to understand that the foundation of the Internet is IPv4 addresses. These addresses enable information exchanges and connections between servers and Internet-enabled devices (phones, tablets, computers, etc.). When devices are retired or migrated to IPv6, IPv4 addresses become dormant (also called "sleeping addresses").

Like your brain at rest or a bear hibernating, these dormant addresses require power to exist — even without being actively used. Many organizations, especially those in the telecom, financial services, healthcare, and IoT spaces, are sitting on tens of thousands or more dormant IPv4 addresses. This is a big problem and an area of opportunity for those of us passionate about building a sustainable Internet.

The scary thing is that government regulation on ISPs won't impact the hundreds of millions of dormant IPv4 addresses. Furthermore, consumer pressure groups focused on data centers likewise ignore the waste of power required by allocated but unused IPv4 addresses. We need a system that incentivizes businesses with stocks of IPv4 addresses to reintroduce them to useful life, rather than drawing on our limited natural resources every hour of every day with no end in sight.

Think of what GoDaddy did for domain names in the early 2000s — democratizing a secure marketplace. No longer did businesses have to sit on their unused domain name inventory — they could reintroduce it to a marketplace for someone else to use it. We need to do something similar for allocated IPv4 addresses that are no longer in use.

Yes, ISPs play a big role in a sustainable Internet. Data centers do as well. However, to ignore the role that you and your organization play is foolhardy. Let's reward businesses for drawing down their inventory of dormant IPv4 addresses by giving them a platform to lease them securely. Businesses should also recognize and reward the individuals who spearhead drawing down their power-wasting, unused IPv4 inventory.

This collaboration will reduce the number of newly allocated IPv6 addresses while dramatically lowering the number of dormant IPv4 addresses. If you can influence your organization to realize a new revenue stream by leasing its IPv4 addresses, we will have solved one of the major challenges of building a sustainable Internet. It is achievable, but it is up to each one of us to champion the reduction of sleeping IPv4 addresses at our businesses.

Written by Vincentas Grinius, CEO and Co-Founder at IPXO | 18-Dec-2020 00:47

The Government of Niue Launches Proceedings With ICANN to Reclaim Its .nu Top-Level Domain

The Government of Niue, a small island 2,400 kilometers northeast of New Zealand, launched proceedings today demanding a "redelegation" of its country code top-level domain, .nu, from the Internet Corporation for Assigned Names and Numbers (ICANN). Jack Kerr, who first broke the news in Business Insider, notes, "the .nu domain has never been in the hands of the Niuean people, with control currently resting with the Internet Foundation of Sweden (IIS), the body in charge of that country's .se space." Kerr adds that Niue is also pursuing a case in the Swedish legal system demanding tens of millions of dollars made from the sale of .nu domains. Pär Brumark, the Swedish domains expert who is leading a delegation on behalf of Niue to claim its space, tells BI, "This is the big one." | 16-Dec-2020 21:28

Understanding Broadband Oversubscription

It's common to hear that oversubscription is the cause of slow broadband — but what does that mean? Oversubscription comes into play in any network when the aggregate subscribed customer demand is greater than the available bandwidth.

The easiest way to understand the concept is with an example. Consider a passive optical fiber network where up to 32 homes share the same neighborhood fiber. In the most common GPON technology, the customers on one of these neighborhood nodes (called a PON) share a total of 2.4 gigabits per second of download capacity.

If an ISP sells a 100 Mbps download connection to 20 customers on a PON, those customers can use at most 2 gigabits per second in aggregate, meaning there is still unsold capacity — each customer is guaranteed the full 100 Mbps connection inside the PON. However, if an ISP sells a gigabit connection to 20 customers, then 20 gigabits per second of potential customer usage have been pledged over the same 2.4-gigabit physical path. The ISP has sold more than eight times the capacity that is physically available, and this particular PON has an oversubscription ratio of roughly 8.
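The arithmetic above can be expressed as a small helper (a sketch only; the 2,400 Mbps figure is the rounded GPON downstream capacity used in this example):

```python
def oversubscription_ratio(subscribers: int, speed_mbps: float,
                           capacity_mbps: float) -> float:
    """Aggregate sold bandwidth divided by the physical capacity of the node."""
    return (subscribers * speed_mbps) / capacity_mbps

# 20 customers at 100 Mbps on a 2,400 Mbps GPON node: still under capacity.
print(oversubscription_ratio(20, 100, 2400))   # ~0.83, no oversubscription

# 20 customers at 1,000 Mbps on the same node: roughly 8x oversubscribed.
print(oversubscription_ratio(20, 1000, 2400))  # ~8.33
```

A ratio at or below 1 means every customer can use their full purchased speed simultaneously; anything above 1 is a bet on how customers actually behave.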

When people first hear about oversubscription, they are often aghast — they think the ISP has done something shady by selling people more bandwidth than can be delivered. But in reality, an oversubscription ratio recognizes how people actually use bandwidth. In the gigabit example, it's highly likely that customers will always have access to the bandwidth they need.

ISPs understand how customers use bandwidth, and they can take advantage of the real behavior of customers in deciding oversubscription ratios. In this example, it's highly unlikely that any residential customer ever uses a full gigabit of bandwidth — because there is almost no place on the web where a residential customer can connect at that speed.

But more importantly, a home subscribing to a gigabit connection rarely uses most of the bandwidth it has purchased. A home isn't using much bandwidth when people are asleep or away. The residents of a gigabit home might spend the evening watching a few simultaneous videos and barely use any bandwidth. The ISP is banking on the normal behavior of its customers in determining a safe oversubscription ratio. ISPs have come to learn that households buying gigabit connections often don't use any more bandwidth than homes buying 100 Mbps connections — they just complete web transactions faster.

Even should bandwidth in this example PON ever get too busy, the issue is likely temporary. For example, if a few doctors lived in this neighborhood and were downloading big MRI files at the same time, the neighborhood might temporarily cross the 2.4-gigabit available bandwidth limit. Since transactions happen quickly for a gigabit customer, such an event would not likely last very long, and even when it was occurring, most residents in the PON wouldn't see a perceptible difference.

It is possible to badly oversubscribe a neighborhood. Anybody who uses a cable company for broadband can remember back a decade when broadband slowed to a crawl when homes started watching Netflix in the evening. The cable company networks were not designed for steady video streaming and were oversubscribing bandwidth by factors of 200 to one or higher. It became routine for the bandwidth demand for a neighborhood to significantly surpass network capacity, and the whole neighborhood experienced a slowdown. Since then, the cable companies have largely eliminated the problem by decreasing the number of households in a node.

As an aside, ISPs know they have to treat business neighborhoods differently. Businesses might engage in steady large bandwidth uses like connecting to multiple branches, using software platforms in the cloud, using cloud-based VoIP, etc. An oversubscription ratio that works in a residential neighborhood is likely to be far too high in some business neighborhoods.

To make the issue even more confusing, the sharing of bandwidth at the neighborhood level is only one place in a network where oversubscription comes into play. Any other place inside the ISP network where customer data is aggregated and combined will face the same oversubscription issue. The industry uses the term chokepoint to describe a place in a network where bandwidth can become a constraint. There is a minimum of three chokepoints in every ISP network, and there can be many more. Bandwidth can be choked in the neighborhood as described above, can be choked in the primary network routers that direct traffic, or choked on the path between the ISP and the Internet. If any chokepoint in an ISP network gets over-busy, then the ISP has oversubscribed the portion of the network feeding into the chokepoint.
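The chokepoint idea can be sketched in a few lines: the speed available to traffic is capped by the most constrained link along the path (all names and figures below are illustrative, not taken from any particular network):

```python
def effective_bandwidth_mbps(chokepoints: dict) -> float:
    """Traffic can never flow faster than the tightest chokepoint on the path."""
    return min(chokepoints.values())

# Hypothetical capacities at three common chokepoints in an ISP network.
network = {
    "neighborhood_node": 2400,   # shared PON capacity
    "core_router": 10000,        # primary routing capacity
    "transit_link": 1000,        # ISP's connection to the internet
}
print(effective_bandwidth_mbps(network))  # 1000
```

In this sketch, upgrading the neighborhood node would accomplish nothing: the transit link is the binding constraint, which is why ISPs must watch every aggregation point, not just the last mile.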

Written by Doug Dawson, President at CCG Consulting | 15-Dec-2020 21:31

Revisiting APT1 IoCs with DNS and Subdomain Intelligence

Cyber espionage is a type of cyber attack that aims to steal sensitive and often classified information to gain an advantage over a company or government. The 2020 Data Breach Investigations Report (DBIR) revealed that several hundreds of incidents across industries in the previous year were motivated by espionage.

We zoom in on one cyber espionage group believed to be responsible for dozens of security breaches. The group, dubbed "APT1" or "Advanced Persistent Threat Group 1," is among the most prolific and persistent APT groups. It reportedly stole hundreds of terabytes of data and maintained access to victim networks for as long as 1,764 days.

While the group is believed to be inactive, its implant code was reused in 2018. Could the indicators of compromise (IoCs) of APT1 be reused, too? Are there APT1 patterns detectable in currently active fully qualified domain names (FQDNs)?

APT1 IoCs and Trademarks

Cybersecurity professionals closely monitor APT groups, including APT1. From one FireEye report detailing such monitoring, we obtained several IoCs consisting of:

  • 88 domain names
  • 7 subdomains
  • 8 email addresses
  • 6 netblocks
  • 3 IP addresses

APT1 actors also tend to leave signatures in the weapons they use. For instance, the APT1 persona identified as "Ugly Gorilla" notably imprinted the initials "UG" in FQDNs or subdomains. Some examples mentioned in the report are:

  • ug-opm[.]hugesoft[.]org
  • ug-co[.]hugesoft[.]org
  • ug-rj[.]arrowservice[.]net
  • ug-hst[.]msnhome[.]org

All of these subdomains are tagged "malicious" by VirusTotal.

Revisiting the APT1 IoCs

We used a combination of WHOIS, DNS, and IP intelligence tools to revisit and discover more about the IoCs.

Domain Names and Associated IP Addresses

Of the 88 domain names publicly attributed to APT1, 28 remain active in the Domain Name System (DNS) as of 4 December 2020. Several of the domains were typosquats of legitimate companies, some of which now own the IoC domains (likely as part of typosquatting protection strategies). These domains and their respective registrant organizations are:

  • arrowservice[.]net: Arrow Electronics, Inc.
  • mcafeepaying[.]com: McAfee LLC
  • msnhome[.]org: Microsoft Corporation
  • myyahoonews[.]com: Oath Inc.
  • yahoodaily[.]com: Oath Inc.

Of the remaining 23 APT1 domain IoCs, 19 were tagged "malicious" by VirusTotal and may already be blacklisted by most security systems. However, four of the domains are not tagged as such, even though one is a CNN look-alike domain that cannot be attributed to the news organization.

The table below shows the four domains' corresponding IP addresses and whether they have been reported as malicious. We also retrieved their IP netblocks and checked if they are included in the publicly available IoCs reported by FireEye.

Table 1: IoCs Not Tagged "Malicious"

  • cnndaily[.]net: IP 104[.]31[.]82[.]32 (not tagged malicious, but with 3 communicating files); netblock 104[.]31[.]80[.]0 - 104[.]31[.]95[.]255 (not an IoC)
  • comrepair[.]net: IP 23[.]236[.]62[.]147 (tagged malicious); netblock 23[.]236[.]48[.]0 - 23[.]236[.]63[.]255 (not an IoC)
  • dnsweb[.]org: IP 67[.]222[.]16[.]131 (not tagged malicious); netblock 67[.]222[.]16[.]0 - 67[.]222[.]23[.]255 (not an IoC)
  • uszzcs[.]com: IP 103[.]42[.]182[.]241 (not tagged malicious); netblock 103[.]42[.]182[.]0 - 103[.]42[.]182[.]255 (not an IoC)

Organizations may also want to revisit these IoCs and include them in their blacklists, as there is a possibility that they could be reused. The domain comrepair[.]net, for one, resolves to a malicious IP address.


We used the Domains and Subdomains Discovery tool to see if there are subdomains bearing Ugly Gorilla's signature, searching for subdomains that contain the text string "ug-". Some 590 subdomains that begin with the text string turned up, including the IoC ug-co[.]hugesoft[.]org.

Some of these subdomains could be innocent ones that only happen to begin with "ug-." However, they are worth looking into, especially since APT1 notoriously signed their FQDNs with the said text string.
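As an illustration, filtering a list of FQDNs for the "ug-" signature described above takes only a few lines; the host list below mixes known IoCs from the report with a hypothetical innocuous name:

```python
def ug_prefixed(fqdns):
    """Return FQDNs whose leftmost label starts with the 'ug-' signature."""
    return [name for name in fqdns if name.split(".")[0].startswith("ug-")]

hosts = [
    "ug-opm.hugesoft.org",   # IoC from the report
    "ug-co.hugesoft.org",    # IoC from the report
    "mail.hugesoft.org",     # hypothetical name, no signature
    "ug-hst.msnhome.org",    # IoC from the report
]
print(ug_prefixed(hosts))    # the three 'ug-' names
```

A real hunt would of course run such a filter over passive DNS or subdomain-discovery output rather than a hand-written list.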

The APT1 group had seemingly become inactive. However, that doesn't mean that they can't entrust the weapons in their arsenal to other cyber attack groups. In fact, they may have already done so with their code. Aside from gleaning insights from blacklist sites, it may also be a good idea for organizations to revisit the group's IoCs, check for recent suspicious activities, and uncover more domain and IP footprints. | 15-Dec-2020 20:15

DNS Oblivion

Technical development often comes in short, intense bursts, where a relatively stable technology becomes the subject of intense revision and evolution. The DNS is a classic example here. For many years this name resolution protocol just quietly toiled away. The protocol wasn't all that secure, and it wasn't totally reliable, but it worked well enough for the purposes we put it to. Even though its privacy was non-existent, and it leaked personal information like a sieve, we were largely unconcerned about the ramifications of this. Even when man-in-the-middle attacks that performed DNS substitution became so commonplace that we started to rely on such DNS lies and withholding, we were still largely unconcerned. Obviously, it took a bit of a shock to wake us up out of our collective torpor over the DNS protocol, and the Snowden revelations provided such a jolt.

Since that time, the IETF has taken some concerted action to make significant improvements in the privacy of a number of elements of the Internet's operation, and the DNS protocol has been part of that overall effort. Work has progressed quickly on various approaches, particularly on the crucial stub-to-resolver link, where the end client's identity and the DNS name they are asking for are combined in the DNS query. The first set of responses addressed this by securing the channel used by the DNS, with the DNS over TLS (DoT) and DNS over HTTPS (DoH) specifications enclosing DNS queries and responses in a secure wrapper.

The problem with both DoH and DoT is that neither is all that satisfactory from a privacy standpoint. Each is more of a compromise that poses a difficult question to me, as the end user. If I have to disclose both my identity and what DNS queries I'm making, then is it better to share all this information with my ISP through their DNS resolver, or should I share such critical information with Cloudflare or Google or any other of the open DNS resolver operators? If I can't hide the fact that I am the end client making this query, who is least likely to compromise my privacy or abuse this privileged relationship? Let me restate this conundrum another way: If I have to compromise my privacy to a third party, which third party represents the least risk to me now and in the future? It's a tough question, and the best answer is not having to compromise your privacy at all.


A group at Princeton University, Annie Edmundson, Paul Schmitt and Nick Feamster, together with Allison Mankin from Salesforce, has come up with an approach to break through this uncomfortable compromise with an approach they called "Oblivious DNS" (written up as an Internet Draft, July 2018, also a paper).

The concept is delightfully simple. It is intended to prevent the recursive resolver from knowing both the identity of the endpoint stub resolver and the queries they are making. The stub takes the query name and encrypts it using a session key. Let's call this new query name k{qname}. The session key is encrypted using the public key of the target ODNS server, and appended to the encrypted query name. Let's call this PK{k}. The stub then appends the label of the oblivious DNS server domain (let's use .ODNS in this example). In the DNS the QNAME field consists of 4 sets of 63 bytes, limiting both the key size and encryption scheme used. For this reason, ODNS uses 16-byte AES session keys and encrypts the session keys using the Elliptic Curve Integrated Encryption Scheme (ECIES). Once the session key is encrypted, the resulting value takes up 44 bytes of the QNAME field.

The stub passes this query for k{qname}_PK{k}.ODNS as a conventional DNS query to its recursive resolver. The recursive resolver is unaware of ODNS and treats the query as it would any other. The recursive resolver will then pass a query for this name to an authoritative server for .ODNS. To resolve this name, the ODNS authoritative server will decrypt the session key (as it has the matching private key) and then use this to decrypt the query name. It can then use a conventional recursive resolution procedure to resolve the original query name. The response is encrypted using the provided session key. The ODNS server will then respond to the recursive resolver with the encrypted query name in the query section and the encrypted answer section that it has just generated. Upon receipt of this response, the recursive resolver will pass this to the stub resolver. The stub resolver uses its session key to decrypt the response. (Figure 1)

Figure 1 – Oblivious DNS

In the ODNS approach, the ODNS stub resolver and the ODNS authoritative server collaborate to ensure that the recursive resolver is unaware of the actual query name. The recursive resolver is aware of the identity of the stub resolver, but not the query name. When the recursive resolver passes the query to an authoritative server for this 'special' domain, the authoritative server is aware of the query name (after decrypting it) but does not know the identity of the originating stub resolver, as the recursive resolver masks that information. When the authoritative server resolves this name, acting as a recursive resolver, other parts of the DNS may also be aware of the query name (although query name minimization could mitigate this to some extent), but they are only aware of the identity of the authoritative server, and not the first hop recursive resolver nor the stub resolver.

Caching at the recursive resolver part of the ODNS authoritative server will still function in the same way as conventional DNS caching in recursive resolvers. However, the recursive resolver that has been passed the encrypted query name should not cache the query and encrypted response.
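A minimal sketch of the query-name construction described above: the SHA-256 keystream and the opaque 44-byte key blob are stand-ins for the AES and ECIES steps the paper specifies, and the suffix odns.example is an illustrative stand-in for the .ODNS domain.

```python
import base64
import hashlib
import os

ODNS_SUFFIX = "odns.example"  # hypothetical stand-in for the .ODNS domain

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 counter-mode stand-in for the 16-byte AES session cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def build_odns_qname(qname: str, target_public_key: bytes) -> str:
    session_key = os.urandom(16)                       # k: 16-byte session key
    enc_qname = encrypt(session_key, qname.encode())   # k{qname}
    # PK{k}: modeled here as an opaque 44-byte blob, the size the ECIES
    # encryption of the session key occupies in the QNAME field.
    pk_k = hashlib.sha256(target_public_key + session_key).digest() + session_key[:12]
    assert len(pk_k) == 44
    payload = base64.b32encode(enc_qname + pk_k).decode().rstrip("=").lower()
    # DNS labels may be at most 63 bytes each, so split the payload.
    labels = [payload[i:i + 63] for i in range(0, len(payload), 63)]
    return ".".join(labels) + "." + ODNS_SUFFIX

print(build_odns_qname("www.example.com", b"target-public-key"))
```

The label-splitting step is where the size budget bites: once 44 bytes go to the encrypted session key, the remaining QNAME space bounds how long an original query name can be.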

ODNS is an intriguing approach to the problem, particularly in its use of existing recursive resolver infrastructure, but it's not the only approach out there at the moment.


A different approach to the same problem, namely preventing any agent outside of the client's own domain from being able to match a DNS query to an endpoint identity, is taken by "Oblivious DoH." It is slightly more involved than the original Oblivious DNS approach, as it uses an Oblivious Proxy in addition to an Oblivious Target, wrapping up the entire DNS transaction in an encrypted envelope. The Oblivious Target is, in DNS terms, just a recursive resolver; there is no need for it to also act as an authoritative server, because the steering of the query to the target is performed by DoH rather than via the DNS itself.

An ODoH stub resolver uses DNS over HTTPS to pass queries to an Oblivious Proxy. The stub resolver takes the query, generates a session key (for the response), and encrypts both of these objects with the public key of the Oblivious Target. The HTTP wrapper includes the nomination of the target host and path in its query to the Oblivious Proxy. The proxy looks for the target host and path in the HTTP wrapper and sends the encrypted payload to the target host, again using DoH transport (to the URI https://targethost/targetpath). The Oblivious Target can decrypt the original DNS message using its private key, and it will then act as a recursive resolver to resolve the query name in a conventional manner. The target then encrypts the DNS response using the client-provided symmetric session key and passes it back to the Oblivious Proxy, which then passes the encrypted object back to the client. The client can use the session key to decrypt the response. The original DNS query is left intact in this approach, and a two-level wrapper is used such that the outer wrapper, HTTPS, is used in a hop-by-hop manner, and the inner wrapping of the DNS query and response shields the payload from all bar the intended recipient.

Figure 2 – Oblivious DoH
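The message flow of Figure 2 can be sketched as follows; the toy code replaces the real HTTPS transport and public-key encryption with an XOR keystream and in-process calls, and all class and function names are illustrative, not part of any ODoH specification:

```python
import os

def xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric stand-in for the public-key and AEAD steps of real ODoH.
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def resolve(qname: bytes) -> bytes:
    # Stub recursive resolution; a real target would do full recursion.
    table = {b"www.example.com": b"192.0.2.1"}
    return table.get(qname, b"NXDOMAIN")

class ObliviousTarget:
    def __init__(self):
        self.key = os.urandom(32)          # models the target's key pair
    def handle(self, blob: bytes) -> bytes:
        plain = xor(self.key, blob)        # "decrypt" with the private key
        session_key, qname = plain[:16], plain[16:]
        # Encrypt the answer with the client-supplied session key.
        return xor(session_key, resolve(qname))

class ObliviousProxy:
    # Sees the client's identity, never the query: forwards opaque blobs only.
    def forward(self, target: ObliviousTarget, blob: bytes) -> bytes:
        return target.handle(blob)

def client_lookup(proxy, target, qname: str) -> bytes:
    session_key = os.urandom(16)
    # Encrypt (session key + query) so only the target can read it; the toy
    # cipher reuses target.key where real ODoH would use its public key.
    blob = xor(target.key, session_key + qname.encode())
    return xor(session_key, proxy.forward(target, blob))

target, proxy = ObliviousTarget(), ObliviousProxy()
print(client_lookup(proxy, target, "www.example.com"))  # b'192.0.2.1'
```

Note how the proxy never holds anything it can read, and the target never learns who asked: that separation is the whole point of the design.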


The ODNS approach is constrained because it attempts to use the existing DNS recursive resolver infrastructure and therefore has a limited repertoire as to how it can encrypt the original query. The ODoH structure is slightly different. The disassociation of query and end-client identity is made by deliberately breaking the stub-to-recursive DoH session into two sessions, stub-to-proxy and proxy-to-recursive. The proxy is aware of the identity of the stub but not of the query (as the entirety of the query object is encrypted using a key derived from the public key of the target resolver). In contrast, the target resolver is aware of the query but unaware of the identity of the stub resolver, as it is only aware of the identity of the proxy. It is absolutely essential that the proxy and the target cannot collude in this arrangement. It's probably best they be operated by quite distinct entities.

The advantage of the ODoH approach is that it is not attempting to shoehorn the encrypted information into a conventional DNS query packet format. The disadvantage is that it requires two dedicated agents, the Oblivious Proxy and the Oblivious Target, qualified by a strict proviso that the proxy and target must be operated by distinct entities who have no possibility of colluding. One could imagine the proxy function being replaced by a TLS pass-through function. Don't forget that the DNS query itself does not contain a requestor identity field. In the DNS, that information is provided by the outer layer, so all DNS queries are, by their very nature, anonymous. The rationale for not using a TLS pass-through, and for deliberately breaking the query process into two steps, is to give the client the additional capability to vary the target and disperse its queries over a number of ODoH target servers if it so desires.

The advantage of the ODNS approach is that it only requires the provisioning of a target domain name and an associated authoritative server function, so a single party can be used for the service. The issue with the ODNS approach is that it is evident to any onlooker of the stub-to-recursive component that the client is making an ODNS query, and the query type is visible, even if the query name is opaque. This could be readily addressed if the ODNS stub-to-recursive path used DoH rather than the conventional UDP/TCP transport for this hop.


It's not clear if either of these approaches will catch on, for some definition of what "catching on" means.

There is a strong aversion on the part of most users to playing with provided configurations for name resolution in their devices, so both of these approaches might look like unlikely contenders for the mainstream of the DNS name resolution infrastructure.

However, you also need to remember that these days applications have an increasing interest in protecting their users' privacy and are therefore interested in concealing their activity from other applications, from the host platform, from the local network, and from any other third party at all! One can easily see an ODoH framework being used to support the name resolution requirements of an application where even the very existence of the application's name resolution operations needs to be hidden. Doubtless such requirements exist out there, and it may well be that this becomes a conventional part of an application's toolset in the near future, lifting the application out of the common name infrastructure and creating highly customized and well-secured name frameworks for each application.

Is this application-level customization of the DNS and name resolution operations what we meant when we talked about "DNS Fragmentation?" I guess so, and I suspect we will see more in terms of pre-provisioning and selective privacy measures that take DoH further along a path of what one could characterize as some degree of application-level autonomy from the existing DNS infrastructure.

Written by Geoff Huston, Author & Chief Scientist at APNIC | 15-Dec-2020 19:54

Remediating U.S. 5G Global Supply Chain Security Engagement

For nearly the past four years, the Trump Administration has purported to treat 5G supply chain security through empty political gestures such as network equipment banning. The disinformation reached its absurd zenith subsequent to the election with the QAnon myth of the Kraken. (The myth, advanced by Trump attorneys, asserted that the long-deceased Hugo Chavez, working with China, was corrupting voting machine software to deprive Trump of another term.)

This inanity also resulted in the U.S. government largely refusing to participate, and impeding the engagement of U.S. companies, in major global industry activities over the past four years to develop and implement multiple 5G and virtualisation supply chain standards and certification methods. Indeed, these activities have become ever more open and transparent, with due process, and consensus-based, notwithstanding unfounded Congressional assertions otherwise. The result has embarrassed the nation and damaged American integrity internationally, while costing billions of dollars in unneeded equipment replacement bereft of any actual supply chain security requirements.

The good news is that the international work over the past few weeks demonstrates the continuing healthy evolution of the global 5G virtualisation supply chain security work items at the 3GPP SA Plenary among the hundreds of participating parties, together with future strategy work occurring in the ETSI NFV SEC development body.

An Update on Current 5G Virtualisation Supply Chain Security Work

It is network architecture and service virtualisation that is the revolutionary and most significant aspect of 5G. A comprehensive array of 5G supply chain security work was initiated in 2015 at the suggestion of the National Security Common Criteria Community and proceeded through innovative work in the principal responsible bodies: a combination of NFV SEC, 3GPP, and GSMA. (NFV SEC is one of the eight NFV Industry Specification Groups within ETSI, comprising 122 companies worldwide.) The 5G supply chain security work took the form of open consensus virtualisation security assurance standards developed initially in NFV SEC and migrated to 3GPP, engaging multiple industry and government participants, with implementation and certification occurring through GSM Association oversight bodies and requirements.

In NFV SEC, the work proceeded as NFV (Network Functions Virtualisation) Security. In 3GPP and GSMA, the work proceeded under the acronyms SCAS (Security Assurance Specifications) and SECAM (Security Assurance Methodology), under the aegis of NESAS (Network Equipment Security Assurance Scheme). Several U.S. government agency branches have been cognizant, and OTD participated actively in a segment of the work. The FCC and most other U.S. government agencies steadfastly ignored the work and never participated. Indeed, the Commission's most recent supply chain order embarrassingly fails to even recognize the existence of five years of global supply chain security work within the industry's principal bodies.

The recent quarterly 3GPP SA#90 plenary was an opportunity to review the progress of all the 5G virtualisation security work items in the security group SA3. There are currently eight SCAS work items that cover the key components of the 5G virtualisation ecosystem, including innovative capabilities such as "virtualized network products" and a set of five enhanced building blocks that includes network slice authentication and authorization, and service communication proxies. The work is supported by 18 different vendors and service providers from Asia and Europe, including one from the U.S. All work is slated for finalization in 2021 as part of the 5G Rel. 17 ensemble.

The recent NFV SEC #178 meeting continued to shepherd 5G supply chain security work across multiple other bodies, treated both the above SA3 progress on NFV Infrastructure security assurance and testing, as well as an overview of the threat landscape from one of the leading European national security agencies.

Needed Remediation by the Biden Administration

As the American Electoral College formally cast its votes today to remove Trump from office in 37 days, the new Biden Administration should focus on establishing a Restoring American International Engagement initiative consisting of two prongs. First is to reinstate the American commitment to the international telecommunication and trade treaty agreements and activities which the U.S. helped put in place and ratified. Second is to marshal American Federal and industry resources and leadership to engage in the venues and perfect the ongoing international 5G virtualisation supply chain security initiatives. These actions can then be followed by a knowledgeable imposition of fact-based network supply chain security requirements and processes rather than Kraken myths.

Today, the 5G global security bodies are open, transparent and consensus-based public-private venues where technically definitive work on 5G virtualisation supply chain security occurs. America has the participatory resources: the relevant U.S. government agencies such as NSA and OTD can and should be actively engaged with their counterparts, and American companies and security organizations should be strongly supported in contributing to and reviewing the work, as the nation once did decades ago. Restoring American international engagement here is easily achievable and should be a priority for the new Administration.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC | 14-Dec-2020 18:53

International Law and Cyberspace: It's the "How", Stupid

The Internet has enhanced freedom of communication, ignored national borders, and removed time and space barriers. But the Internet sphere was never a law-free zone. Already ICANN's "Articles of Incorporation" (1998) stipulated that the management of critical Internet resources has to take place within the frameworks of "applicable national and international law". And in 2015, all 193 UN member states confirmed the general applicability of international law in cyberspace. Nevertheless, the issue is part of an ongoing international controversy.

The basic agreement is overshadowed by fundamental disagreements on the "How". The UN Charter, UN conventions on international humanitarian law and human rights, and many other universal legal instruments were negotiated in the pre-digital age. Now, different parties have different interpretations of how the existing legal instruments should be applied in today's interconnected world. Is hacking into foreign networks a "use of force", forbidden by Article 2.4 of the UN Charter? And if yes, would such an attack trigger Article 51, which defines the right of self-defense, and allow a "hack back"? Can "cybersovereignty" be extended beyond national borders? Who decides on the "attribution" of a cyberattack? What about a "drone war" where people are killed using joysticks, networks, and facial recognition software? Should there be a moratorium or even a ban on Lethal Autonomous Weapon Systems (LAWS)? Are there mechanisms for the peaceful settlement of cyber disputes? Does "digital mass surveillance" violate the human right to privacy? What is the role and the legal status of non-state actors acting as curators for content control or as proxy hackers? Online theft of intellectual property is illegal, but what about state-sponsored online espionage?

For years, those disputes and other controversial law-related cybersecurity issues have been on the agenda of the 1st Committee of the UN General Assembly and its two sub-groups, the Group of Governmental Experts (GGE) and the Open-Ended Working Group (OEWG). To bring more light into the legal grey zones of cyberspace, the OEWG convened a series of multistakeholder expert seminars in December 2020, starting on December 4, 2020, with a special session on "International Law". The Japanese Cyber Ambassador Takeshi Akahori and Prof. Dapo Akande from Oxford University co-chaired the meeting. Liis Vihul, Marietje Schaake, Harriet Moynihan, Sheetal Kumar, Jan Neutze, Duncan Hollis, Tilman Rodenhäuser and others testified. It was an excellent high-level discussion among policymakers and legal experts. It confirmed the existing agreements, but it also reconfirmed the existing disagreements.

Capacity Building: What about some Cybersecurity Books under the Christmas Tree?

There is no shortage of expert knowledge about the threats in the digital world. The risks of the militarization of cyberspace are well known. As some speakers outlined, more than 60 countries have now developed offensive cyber capabilities. However, governments are far from a consensus on creating a legal framework and minimizing the risks of a digital disaster. Even shocks like COVID-19 seem to produce more intergovernmental controversies, not fewer. Experts have no problem agreeing that attacking the data centers of hospitals, medical research institutes or supply chains for vaccines is unacceptable and should be treated as an illegal action. However, such attacks are taking place without any consequences.

Do diplomats and policymakers really understand the threats of escalating cyberattacks and their cascading side effects? Could enhanced capacity building, as discussed in the OEWG, help limit risky behavior in cyberspace?

The short answer to the second question is probably "yes". Capacity building is a good idea. Nobody can refuse it. It can help to build trust among adversaries. But it also needs a political will.

Enhanced legal knowledge is available. We have the "Global Forum on Cyber Expertise" (GFCE), databanks, archives and many academic books. And we do have the Tallinn Manual 2.0, something like the "Cybersecurity Bible". Reading books, in particular between Christmas and New Year's Eve, makes a lot of sense. Wouldn't it be a good idea to put some new books under the 2020 Christmas trees? Here are four recommendations from 2020:

Francois Delerue: Cyber Operations and International Law

The book "Cyber Operations and International Law", published by Cambridge University Press, was written by Francois Delerue, a researcher at the Institute for Strategic Research of the Paris-based Ecole Militaire (IRSEM). It offers a comprehensive analysis and a systematic examination of attribution, lawfulness, and remedies regarding the cyber activities of state and non-state actors. He makes clear that in the 2020s, the militarization of the Internet is a fact, and "cyberspace is considered to be another domain for military activities".

According to Delerue, state-sponsored cyber operations take "a mosaic of forms and serve an array of purposes". But he also argues that cyberwarfare, although often at the center of the public discussion, is not only ill-defined but also just the "tip of an iceberg". The majority of state-sponsored cyber activities are below the threshold of cyberwarfare. They do not produce "death and destruction" in enemy states, but they can create chaos and confusion in societies.

Delerue recommends looking beyond the "prohibition of the use of force" principle and analyzes the deeper consequences of the violation of other jus cogens principles of the UN Charter, such as territorial sovereignty or the principle of non-intervention. Delerue argues that "international law does not leave States helpless against cyber operations, even when the right to self-defense cannot be invoked." He makes it clear that "the perpetrating State has to provide full reparation for the damage caused by its cyber operations." He analyzes states' responsibility if their territory is used for the transit or launch of cyber operations by third parties.

In this context, he offers a very useful concept for the controversial issue of "attribution". He distinguishes between "attribution to a machine, to a human and to a State" and proposes a variety of specific procedures and how to identify and react to unfriendly actions. Delerue also makes clear that there is a distinction between state-sponsored cyber operations and cybercrime. "State cybersecurity and private cybersecurity are covered by two different legal frameworks".

Matthias Kettemann: The Normative Order of the Internet

Kettemann is with the Leibniz Institute in Hamburg. His book is published by Oxford University Press. He tries to "decomplexify and demystify" Internet regulation and offers a "sophisticated multilayered model of a comprehensive and nuanced regulatory order" between "utopian ideals" and "technocratic pessimism". He says that there is no "Grundnorm" within the Internet governance ecosystem. The legal framework for the Internet is "hybrid in nature" and consists of several interconnected layers. It is a complex of norms, values and practices that relate to the use and development of the Internet.

He discusses Lawrence Lessig's "code is law" slogan and concludes that "code does not just appear, it is written in processes (that can be regulated) by coders who can be subjected to norms, employed by companies with values and targets to be debated in public forums, with aims and functions that can be measured against the finalities of the normative order of the Internet." He concludes that "protocols therefore have politics" and that norms need to be consistently applied to their development and implementation. This finding, he adds, also applies "to algorithms and algorithmic decision-making, including selection and recommendations logics that have clear implications for rights and freedoms". He supports the multistakeholder model but recognizes that this model, as it stands in 2020, "suffers from substantial conceptual deficits."

In his summary, he states: "The rule on (and of) the Internet must protect rights and values online (the Internet's nomos), legitimize the exercise of private and public authority (through stabilizing the nomos normatively and through narratives) and ensure a fair distribution of basic goods and rights as they relate to the Internet, including Internet access and access to Internet content."

Niels ten Oever: Wired Norms

Niels ten Oever has worked for many years with the human rights organization "Article 19". His "Wired Norms: Inscription, resistance and subversion in the governance of the Internet infrastructure" is based on the dissertation he defended in summer 2020 at the University of Amsterdam. He analyzes the interrelationship between technical arrangements and legal norms, particularly human rights. He looks into the policies and practices of three technical organizations — ICANN, the IETF and the Regional Internet Registries (RIRs) — and identifies frictions between the multilateral Internet governance regime, which regulates public policy issues (such as privacy or information content), and the self-regulatory multistakeholder and private Internet governance regimes, which deal with technical issues (such as Internet protocols, standards, domain names and IP addresses).

He concludes that one should not see this friction as a "structural misalignment" but as "mutually beneficial". While states may not want to focus on the interconnection and innovation of technologies, transnational corporations do not need or want to develop their own policies and standards vis-a-vis social and legal norms. He argues for a "wiring of norms" and hopes that cross-pollination between the two regulatory worlds could produce "alternative routes to govern the Internet."

Dennis Broeders & Bibi van den Berg: Governing Cyberspace

"Governing Cyberspace: Behaviour, Power and Diplomacy", published by Rowman & Littlefield in 2020, is based on papers presented at a conference on responsible behavior in cyberspace in November 2018 in The Hague. It includes papers like "Electoral Cyber Interference, Self-Determination and the Principle of Non-Intervention in Cyberspace" (Nicholas Tsagourias) and "Violation of Territorial Sovereignty in Cyberspace" (Przemyslaw Roguski), a chapter on the multistakeholder model of Internet governance (Jacqueline Eggenschwiler & Joanna Kulesza), and chapters on the cyber activities of China (Rogier Creemers), Russia (Xymena Kurowska) and NATO (Steven Hill & Nadia Marsan).

Alexander Klimburg and Louk Faesen argue in their paper "A Balance of Power in Cyberspace" in favor of a "holistic approach". The Internet has linked cybersecurity issues, digital economy, human rights and technology development (as AI or IoT) in a new way, which has consequences for all kinds of global diplomatic negotiations. They see in the United Nations and the first three committees of the UN General Assembly an already existing political mechanism for such a "holistic approach" to develop regulatory frameworks for cyberspace and digital cooperation.

Klimburg & Faesen use the "balance of power theory" to recall that a realistic approach to stability and international order needs compromises that will give all parties the same "relative security and relative insecurity". Stability in cyberspace "hinges upon the acceptance of the framework of the international order by all major powers, at least to the extent that no state is so dissatisfied that it expresses it in a revolutionary foreign policy." They describe this as a challenge to find solutions based on the "recognition of the limits" by states with regard to the "technical reality of the domain inhibiting one party from deciding universally and unilaterally, arguably defined as the multistakeholder reality in the context of cyberspace." Balancing states' interests in cyberspace is crucial. The holistic approach could be the start of a new beginning in creating a stable and peaceful cyberspace.

Looking forward towards 2025

Academic expertise is valuable, but the time is now ripe for governments to take a position. A number of states — Finland, New Zealand, France and Estonia — have recently published legal opinions on the applicability of international law in cyberspace.

The issue of the use of force and countermeasures in cyberspace is one of the key problems. New Zealand published its paper on the eve of the OEWG seminar series on December 1, 2020. It expressed its willingness to explore collective countermeasures in the "collective interest in the observance of international law," citing the "potential asymmetry between malicious and victim states." The paper states that "state cyber activity will amount to a use of force if it results in effects of a scale and nature equivalent to those caused by kinetic activity which constitutes a use of force at international law. Such effects may include death, serious injury to persons, or significant damage to the victim state's objects and/or state functioning". Cyberattacks against hospitals could be such a case. Estonia maintains a similar position, France recently rejected collective countermeasures, and Finland has avoided the matter altogether. In the OEWG seminar, participants discussed whether the publication of "national papers" is useful or could have counterproductive effects, allowing "silent governments" to move away from globally accepted norms.

There were many references to the so-called "like-minded countries." If they agree, this will set the first standard for global arrangements. It is undoubtedly true that it is much easier to reach agreement among governments that share the same values. However, we live in a divided world where different value systems co-exist. In this divided world, we have one Internet. There is no alternative to the complicated and burdensome process of sitting together and figuring out how arrangements can be made among partners that are also competitors and adversaries in an interconnected world. A legal opinion from the International Law Commission would probably be helpful for agreeing on something like a globally accepted "framework of interpretation."

In any case, it will take some time to make progress. Nevertheless, there are some encouraging signs. The fact that the UN is increasingly becoming a place where not only governments but also non-state actors discuss highly politicized issues such as cybersecurity is an interesting step forward towards a new culture of global policy development. The extension of the OEWG mandate by the 75th UN General Assembly until 2025 is another interesting signal.

The new UN resolution on an extended OEWG calls for enhanced multistakeholder discussions. It recommends that the new OEWG should not only facilitate "the exchange of views among States on specific issues related to its mandate," but may also decide "to interact, as appropriate, with other interested parties, including businesses, non-governmental organizations and academia." But again, it is the "how" that is the problem. How will non-state actors become involved? What counts as "appropriate"? And how will governments take ideas from non-state actors on board? With the Paris Call, the Tech Accord, and the Final Report of the Global Commission on the Stability of Cyberspace, there are already good examples of multistakeholder cooperation in developing cyber norms on the table. The next opportunity to move forward is to propose innovative procedures for future interaction among state and non-state actors in cybersecurity at the forthcoming OEWG meeting in March 2021.

Written by Wolfgang Kleinwächter, Professor Emeritus at the University of Aarhus | 10-Dec-2020 21:43

Can We Advance Policies Towards a Safe Transnational Internet Market for Medicines?

Note: This article focuses on key points raised during Workshop #116 of the UN's 2020 Internet Governance Forum: "Pandemics & Access to Medicines: A 2020 Assessment". For a more in-depth review of the themes noted below, please watch the complete video of the IGF Workshop here.

Co-written by Mark Datysgeld and Ron Andruff

As 2020 draws to a close, it becomes possible to assess the trends in the policy areas most impacted by the global pandemic, with health-related policies rising to the top of that list. This article focuses on the sale of medicines over the Internet, a subject that should, without a doubt, be one of the leading concerns of both the general public and policymakers. Yet there is a disconcerting lack of broader social debate around it, even though the need is so great.

As discussed in a previous Circle ID article, had there been proven effective treatments to prevent or fight COVID-19 from the get-go, the rush to buy them would have been unparalleled. What would guarantee access to the medicines? Which rules would govern their sale? Would citizens be limited to buying from potentially understocked home markets or would exemptions be put in place to enable them to import from trusted foreign suppliers? These are questions that remain open, with no clear norms having been advanced within the United Nations environment or elsewhere to address them.

This results in a digital market that neither fully exists nor fully fails to exist. In this space, good actors are hampered from pursuing safe, innovative strategies to drive global competition, while the lack of established overarching norms leaves rogue actors free to cause significant harm the world over. Rogue actors thrive on the loopholes found across jurisdictions and exploit the information asymmetry around medicines, their cost, and their efficacy, overwhelming the online market with substandard products.

Maintaining this status quo contributes significantly to an estimated 8 million deaths each year from preventable causes, roughly one out of every five or six deaths in the world. Meanwhile, an estimated 2 billion people lack access to essential medicines, generating needless suffering, disability, and shortened life expectancy. These assessments predate the COVID-19 pandemic.

Today, society remains at a loss as to what a "new normal" will ideally look like, though we collectively understand that there is a need to move toward a set of general agreements that, were something of this nature to happen again, would spare us from having our entire social fabric torn apart once more. At the end of the day, health has proven to be the one factor that truly forces the hands of all global markets and societies, more than any of the many financial crises the world has experienced over the past century.

In fairness, it is not as if there is no reason for this collective inaction to date. Bertrand de la Chapelle raised the point during "Pandemics & Access to Medicines: A 2020 Assessment" of just how regulated the medicines market has been at the nation-state level, due to what he identifies as a triangle that needs to be balanced when confronting jurisdictional challenges of this magnitude: Human Rights, public safety (which can be considered an extension of Human Rights in certain contexts), and economic concerns.

From the Human Rights perspective, the outlook is quite clear: no person should die from a disease that has a treatment. This is not a difficult point to conceptualize, and we can assume more or less general agreement on it within society. How to actually make this happen is an entirely different matter; but as far as ideals go, we can accept this vision as the default.

However, the question of public safety is a real one. Substandard and falsified medicines (find out about the differences here) pose a threat to our collective well-being, with the World Health Organization (WHO) estimating that 1 in 10 medical products in low- and middle-income countries falls into this category. That is the same as saying that roughly 10% of the people in these countries end up with medical products that may harm or kill them instead of treating their ailments. The need for strict control over which medicines are safe, including their chain of custody, is real.

The question of economic concerns remains the murkiest one, as there are several variables to take into account. The global pharmaceutical industry invests a great deal of resources in drug discovery and clinical trials, and ends up absorbing the losses that come with failed medicines. However, it should equally be noted that these investments are often significantly offset by foundational research performed by universities and public health institutions, which themselves contribute a great deal to medical innovation. It is also the case that, in pricing medicines, the industry targets each market to obtain as much profit as a region's system and economy allow, with the price of many medicines varying widely across different (often neighboring) countries.

The pharmaceutical industry should not work pro bono. It should reap just rewards for what is certainly a great deal of demanding, complex work. But the reality is that medications make up upwards of 70% of health care costs in developing countries, which is not an acceptable situation. The pricing practices in place today do not merely reward the risks incurred in drug development; in the long run, they far outweigh them. A free market in which a medicine's manufacturing quality could be attested according to global standards and trusted by consumers would be more in line with the immediate need for greater access to these life-saving products.

There is a need to stress the situation of medicines in developing nations since, as pointed out by MENA-region health expert Zina Hany, their governments often spend a substantial portion of their overall budget on healthcare, and yet their systems comparatively lag behind in efficiency due to high maintenance expenses. By way of example, drug prices in the USA may vary by hundreds of dollars relative to neighboring Canada, but the same pattern can be observed in the MENA region, where even in a context of less regional wealth there is still significant price differentiation.

When these matters are considered as a whole, there is a key question, posed by Dr. Aria Ilyad Ahmad, that is the next logical step in the discussion: who would be a legitimate convener for the dialogue needed to arrive at these much-needed norms? From an objective standpoint, the WHO would seem to be the appropriate venue, but practical knowledge about the technical aspects of the Internet and its operation, multistakeholder processes, and related matters may be scarcer than desired within the institution.

Could the Internet Governance ecosystem be the space to advance this theme? Possibly. This discussion clearly involves issues that affect the Internet's technical infrastructure as well as issues surrounding it. Technical bodies and jurisdiction- and policy-focused institutions would all need to engage in the effort of generating comprehensive norms in order to achieve the best possible outcomes. Bringing in exogenous actors and making an effort to educate and establish dialogue with them would also go a long way toward achieving progress in an area as intricate as health.

The IGF itself has served as a neutral forum where such conversations can be (and have been) advanced. But now there is a need for a permanent locus where the intersection between medicine and the Internet can be addressed, and where discussions can advance at a pace that reflects the speed with which this recurring theme is growing in importance. Looking toward the future, the IGF Plus project would seem to couple well with this goal; even though it is not yet entirely clear how that body will be structured, it remains a potential home for the development of recommendations for global norms.

One arena within which these questions can be legitimately advanced is ICANN, where work that has been done at the Internet & Jurisdiction Policy Network and within the Domain Name Association has been reflected in the voluntary DNS Abuse Framework, which incorporates provisions to tackle websites dedicated to the sale of illegal opioids. While it would not necessarily be a "silver bullet," the further deepening of the relationship between the global policymaking community and the ICANN contracted parties would facilitate the creation of more objective norms regarding the sale of legitimate medicines using the Internet, while at the same time combating rogue actors. This would be a valuable start to enable ground rules to be set.

Research into these subjects is ongoing and, in some ways, still incipient. The production of evidence-based material to enable decision-making in relation to the establishment of a safe transnational Internet market for medicines started to really emerge in the 2010s, and has been picking up strength ever since. The time is ripe for the IG community to pay more attention to this emergent issue, and to make it more present in our collective efforts to improve the Internet space and further its role as a tool to generate well-being and decrease inequalities.

There is a cost to inaction that may be hard to perceive, but it is there and it is growing. The longer a way forward is delayed, the farther the policymaking community will be from agreeing on norms that can generate common good and further this cause in a meaningful way. The first step, though, is understanding that there is a very real role for the Internet Governance community in generating this change.

Written by Mark Datysgeld, GNSO Councilor at ICANN | 09-Dec-2020 21:13

Are There any Cable Companies Left?

Are there any companies left that we can still call cable companies? Everything in the business press still refers to Comcast and Charter as cable companies and AT&T and Verizon as telephone companies. It's getting harder to justify using these traditional labels, and maybe the time is finally here to just start calling them all ISPs.

After all, these four companies collectively have 80 million broadband customers, meaning they now hold around 73% of all broadband customers in the country. They also have about 73% of all traditional cable customers, at 58 million, but that number has been tumbling and is down from 64 million just a year ago. It was only a few years ago when the broadband and cable TV markets crossed and broadband became the predominant product for these companies — and since then, the gap between the two product lines has been growing quickly.

FierceVideo published an article in September interviewing the CEOs of Comcast, Charter, and AT&T and asking each for their views on the future of cable TV. Their responses are not surprising in an industry where traditional cable subscriptions are shrinking quickly.

Brian Roberts of Comcast said he is "indifferent" to having customers on traditional cable TV or on Comcast's Flex product, which is free and ad-supported. And that doesn't even count the 14 million people who are now watching Comcast's online Peacock service. Comcast sees all video products as important in making its broadband customers stickier. AT&T's John Stankey said something similar. He said he values the traditional cable TV product but that the company is betting on online offerings like AT&T TV and HBO Max. Charter is the only large company still on the traditional track, and the company added cable customers in the second quarter of this year. But Charter CEO Tom Rutledge foresees growth coming to an end since the company feels obligated to pass video content rate increases on to cable customers.

Both Comcast and Charter have made up some of the loss in cable customers by launching a successful cellular product. At the end of the third quarter this year, Comcast had 2.6 million cellular customers, and Charter had grown to 2 million. Both companies will be working to increase the profit margins of the cellular product by shifting traffic from resold cellular to company-owned small cell sites. Both companies have a built-in advantage in that they already own fiber deep into neighborhoods, so both should be able to deploy cellular small cells without having to lease transport. I find it interesting that these two traditional cable companies seem to be doing a better job of bundling in cellular service than was ever done by AT&T and Verizon — those two companies never seemed to find a way to do that.

In my writing about the industry, I have lately been referring to these big companies as ISPs or incumbents because the terms cable company and telephone company seem to have lost relevance. It's becoming hard to distinguish between Comcast and AT&T in markets where AT&T is competing against Comcast using gigabit fiber.

I'm at a loss to explain why the industry continues to call Comcast a cable company. The percentage of revenue that comes from cable TV is dropping quickly, and the share of margin from cable is dropping even faster. The amount of money that AT&T makes from traditional telephone service is so small that it's a challenge to even find the word telephone in the company's financial report. But I guess old habits are hard to break. We instantly know who is being referred to when somebody says "large cable companies" or "large telcos." But I'm still looking forward to a time when these monikers are so rare that we'll have to explain what they mean to children.

Written by Doug Dawson, President at CCG Consulting | 09-Dec-2020 20:45

A New Privacy-Focused DNS Protocol Released Called Oblivious

Cloudflare and Apple, along with Fastly, on Tuesday announced a proposed new DNS standard that separates IP addresses from queries, preventing any single entity from seeing both at the same time. The protocol, called Oblivious DNS over HTTPS (ODoH), is open source and available for anyone to try out or to run their own ODoH service. The team from Cloudflare explains: "ODoH works by adding a layer of public key encryption, as well as a network proxy between clients and DoH servers such as 1.1.1.1. The combination of these two added elements guarantees that only the user has access to both the DNS messages and their own IP address at the same time." (Learn more: Cloudflare's announcement, Internet-Draft, the Paper) | 09-Dec-2020 20:23
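The separation property described above can be illustrated with a toy model (a hypothetical sketch only — a single shared XOR key stands in for the target's real public-key encryption, and all class and variable names here are invented for illustration, not part of the ODoH specification): the proxy sees the client's address but only an opaque blob, while the target decrypts the query but never learns who sent it.

```python
# Toy illustration of the ODoH separation property. NOT real cryptography:
# a shared XOR key stands in for the target resolver's key pair.

TARGET_KEY = 0x2A  # hypothetical stand-in for the target's key material

def xor_bytes(data: bytes, key: int) -> bytes:
    """Toy 'encryption'/'decryption': XOR every byte with the key."""
    return bytes(b ^ key for b in data)

class Target:
    """The DoH resolver: decrypts queries, but never sees client addresses."""
    def __init__(self, key: int):
        self.key = key
        self.seen_queries = []

    def resolve(self, blob: bytes) -> bytes:
        query = xor_bytes(blob, self.key).decode()
        self.seen_queries.append(query)        # target learns the query...
        answer = f"192.0.2.1 for {query}"      # ...but not who asked it
        return xor_bytes(answer.encode(), self.key)

class Proxy:
    """Forwards opaque blobs: sees client addresses, never plaintext queries."""
    def __init__(self, target: Target):
        self.target = target
        self.seen_clients = []

    def forward(self, client_ip: str, blob: bytes) -> bytes:
        self.seen_clients.append(client_ip)    # proxy learns the client IP...
        return self.target.resolve(blob)       # ...but relays only ciphertext

def client_lookup(proxy: Proxy, client_ip: str, name: str, key: int) -> str:
    """The client encrypts for the target and sends via the proxy."""
    blob = xor_bytes(name.encode(), key)
    return xor_bytes(proxy.forward(client_ip, blob), key).decode()
```

After a lookup, the proxy has recorded only the client address and the target only the query name — no single party ever held both, which is the guarantee the real protocol achieves with genuine public-key encryption.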

RSS and Atom feeds and forum posts belong to their respective owners.