Facebook Is Right to Call the Australian Government's Bluff

As mentioned in previous analyses, the way that the Government has approached its battle with the digital giants has been flawed from the beginning.

True, its tough stand has made Google pay media companies well above what they would have been able to negotiate individually, but the fundamentals driving these battles remain unchanged.

Google was prepared to pay these 'premiums' to make sure that its business model would survive. It is the company's advertising business model that it was keen to protect, and for that reason, it was prepared to pay off the news companies. So nothing fundamental has been solved by the Australian Government through its media code. The next battle is simply waiting to happen, and the regulator (the ACCC) has already foreshadowed that it will concentrate on that advertising business model. This will be a much tougher battle, one that Australia will not be able to win on its own. Google will use its full legal power and gigantic financial resources to defend its business.

It also shows that actions from individual governments are counterproductive. The French, who took a different approach, secured for their media companies only a fraction of what Google has paid to Australian media; how will that make the French feel? Only united action against the global digital moguls will lead to structural changes, and I have discussed some of the structural changes proposed by the EU in "Can we control the Digital Platforms".

Now on to Facebook. I fully agree with Facebook that the Government's action in relation to the way Facebook distributes news is out of all proportion and, as a matter of fact, simply wrong.

News organizations around the world voluntarily distribute their news to whoever wants to use it. Facebook is not involved in this at all. Unlike Google, it doesn't extract content, it doesn't create news snippets, and it does not distribute links.

All of this is up to the news companies who provide their services via Facebook. It is entirely up to them whether they provide full articles, snippets, or links, send users to paywalls, and so on.

It is true that all the information that Facebook now blocks can be obtained elsewhere. However, Facebook is such a well-known, integrated platform used by most Australians that it will be the organizations who provide services on the platform and who are now blocked, who are the ones who suffer from this action.

I would think that common sense will prevail here and that the Government will limit the media code to those digital companies that actively make money from the content of others. Unlike Google's, Facebook's business model is not really affected by the media code, so there was no need for Facebook to negotiate; there was, as a matter of fact, nothing to negotiate.

If the Government wants to stick to its media code, it will also have to make Twitter, LinkedIn and others pay for the same service that Facebook provides. You could even argue that telephone and postal services, which are also used to distribute news, should fall under the code, which would of course be totally ridiculous.

It is also in the Government's own interest that it can continue to use the Facebook platform to distribute its own news. Once again, there are other ways to do that, but the reach of Facebook is unsurpassed and, as such, very valuable for the distribution of such information.

Do I let Facebook off the hook? Not at all, but if we want to get control over the digital media and avoid the damage they are doing to our society, economy, and democracy, we need to be far more strategic. We will need to work together globally on these issues.

Written by Paul Budde, Managing Director of Paul Budde Communication | 18-Feb-2021 21:32

PIR Launches New Institute to Combat DNS Abuse

Public Interest Registry (PIR), the non-profit operator of the .org top-level domain, today launched the DNS Abuse Institute, a centralized effort to combat DNS Abuse. In its news release, PIR said the Institute "will bring together leaders in the anti-abuse space to fund research, publish recommended practices, share data, and provide tools to identify and report DNS Abuse." More from today's announcement:

— "As part of this initiative, the Institute is forming an advisory council with expert representation from interested stakeholders across issues related to DNS Abuse."

— "The DNS Abuse Institute will focus on three foundational areas—innovation, collaboration and education."

— The Institute will hold its first forum this spring featuring anti-abuse experts – State of DNS Abuse: Trends from the last three years and current landscape, Tuesday, March 16, 2021 at 16:00-17:30 UTC | 17-Feb-2021 19:48

An Institute to Combat DNS Abuse

Co-authored by Brian Cimbolic, Vice President, General Counsel at PIR and Graeme Bunton, Director, DNS Abuse Institute.

Over the last few years, it's become clear that abuse of the Domain Name System — whether in the form of malware, botnets, phishing, pharming, or spam — threatens to undermine trust in the Internet. At Public Interest Registry, we believe that every new .ORG makes the world a better place. That means anything that gets in the way of that is a threat, and that includes DNS Abuse.

Fighting DNS Abuse is a fundamental part of PIR's mission as an exemplary non-profit registry. PIR hopes to build on the foundation of the DNS Abuse Framework, which it helped create. To take these efforts to the next level, we have announced the creation of the DNS Abuse Institute. The Institute is charged with creating initiatives that will establish recommended practices, foster collaboration, and develop industry-wide solutions for combating DNS Abuse.

We know that PIR cannot eradicate DNS Abuse single-handedly, but efforts such as this new Institute can make a significant impact across the DNS. PIR may have created the Institute, but the Institute will support the entire DNS community. We are actively building an advisory council to guide the organization's efforts in a way that reflects the diverse interests involved in combating DNS Abuse.

The first step will be to bring people together. Our intention is to host forums where we can convene leaders in the space to better understand the biggest challenges the community faces. The first of these will be held on March 16, 2021, and we encourage everyone committed to combating DNS Abuse to join us.

We know that the more we gather, the more we will all learn from each other, and the focus of the Institute will evolve and grow.

But as a starting point, we are focused on three core areas:

  • Driving innovation in the DNS. The Institute will create recommended practices to address DNS Abuse with solutions for registries and registrars of various sizes and resources, provide funding to qualified parties to conduct innovative research on cybersecurity and DNS Abuse related issues, and develop practical solutions to combat DNS Abuse.
  • Serving as a resource for interested stakeholders. This includes maintaining a resource library of existing information and practices regarding DNS Abuse identification and mitigation, promulgating abuse reporting standards (e.g., what is needed for a "good" notification on abuse), and publishing academic papers and case studies on DNS Abuse.
  • Building a networking forum and a central sharing point across stakeholders. Collaboration with technical and academic organizations that work on DNS Abuse issues, registries, registrars, and security researchers will help enable the entire DNS to be better equipped to fight DNS Abuse.

No single organization has all the answers. From the outset, we intend to work closely with other organizations in the anti-abuse space, including technical organizations and thought-leading organizations like the Internet and Jurisdiction Policy Network.

If you have ideas about how to combat DNS Abuse, we urge you to reach out and to join the conversation. We look forward to discussing the important questions surrounding DNS Abuse with stakeholders from across the DNS community.

We understand the challenge in front of us. We look forward to taking it on alongside so many others in our community. We hope you'll join us.

Written by Graeme Bunton, Director, DNS Abuse Institute | 17-Feb-2021 19:11

SpaceX Starlink Is Coming to Low-Income Nations

I expect that many Starlink customers in low-income nations will be organizations in which connections are shared.

The introduction of the Internet into a community will have unanticipated side-effects on the community and the individuals in it.

Early Indian VSAT terminal

Beta testers in the US and Canada paid $500 for a terminal and are paying $99 per month for the service. The beta tests began in high-income countries, but SpaceX is beginning to roll Starlink out and will include low-income nations, for example, India.

Last September, SpaceX responded to a request for consultation on a roadmap to promote broadband connectivity and enhanced broadband speed from TRAI, the Telecom Regulatory Authority of India. In its response, SpaceX made several recommendations that would enable it to quickly begin service in India. The talks between SpaceX and TRAI must be going well because, on November 2, Elon Musk tweeted that they would be operating in India "As soon as we get regulatory approval. Hopefully, around the middle of next year".

Musk is famously optimistic, but let's assume they are authorized by TRAI — will rural Indian consumers be able to afford the price SpaceX is charging in the US? For now, the price of Starlink service will be the same in every nation, but that may change if they find they have excess capacity after more satellites are launched.

Loading Pikangikum terminals
(from 5-minute video)

Regardless, many Starlink customers in low-income nations will be organizations in which connections are shared rather than individual homes. We already see such examples among the current beta testers. One beta site is the Pikangikum First Nation, a 3,000-person indigenous community in remote Northwestern Ontario, Canada, where Starlink is serving community buildings and businesses as well as residences. Other Starlink beta testers are Allen Township, outside of Marysville, Ohio, and the Ector County, Texas, and Wise County, Virginia, school districts, which are installing Starlink terminals in student homes.

I expect that many Indian installations will be like these — serving community organizations, clinics, schools, businesses, telecenters, etc. rather than consumers in their homes. Of necessity, low-income nations have a long history of shared Internet resources and India is no exception. My colleagues and I found Internet kiosks and telecenters in India in the early days of the Internet, click here and here, and in other low-income nations. For a richly illustrated global tour of early telecenters and their applications and impact, click here. (Jim Cashel has suggested that SpaceX should focus on schools).

SpaceX CEO Elon Musk has quipped that setting up a Starlink connection is as simple as pointing the terminal at the sky and plugging it in and that seems to have been close to true for the Pikangikum community. The Pikangikum installation was spearheaded by FSET Information Technology. FSET delivered the terminals and installed the first 15 then community members took over and installed 45 more.

But what about more difficult installations? A casual perusal of the Starlink discussion on Reddit shows that some users have to build creative mounts to connect in wooded or otherwise obstructed areas. Skill is needed for installations in areas where there is no clear view of the sky or some local networking is used to share satellite terminals.

Bill Clinton and Al Gore at
Netday96 (Source)

Again, of necessity, technical improvisation by citizens is common in low-income nations. Think of community "street nets" which may be small like this one in Gaspar, Cuba, or large like SNET in Havana. Building networks like these requires technical skill, tools, and supplies, and there may not be an FSET around to help. SpaceX and other constellation operators will need to support rural communities by providing online and in-person training and a marketplace for tools and supplies. The Sun Microsystems Netday initiative for installing local area networks in schools provides an early, successful example of this sort of vendor support of community networking.

Of course installing terminals is just the tip of the iceberg. A community or organization network must be financed and users trained. Again, the constellation operator should play a supporting role. Note that the Musk Foundation has just made a "significant" contribution to Giga in furtherance of their goal of connecting every school to the Internet. I don't know anything about the terms of the grant — whether it is cash or subsidy — but since terminals are expensive and SpaceX is selling them at a loss, perhaps the schools will receive free service. That would cost SpaceX essentially nothing as long as the school was at a location with unused capacity and it could be phased out over time — something like the National Science Foundation phasing out university connections in the early days of the Internet.

Since Teledesic in the 1990s, prospective constellation operators have promised that Internet connectivity would improve the health, education, and economy of unserved regions and entertain the residents as well, but we are no longer naive and have learned that there may be negative social and personal side effects. For example, in 2011 only 1% of individuals in Myanmar were Internet users. Myanmar privatized mobile connectivity in 2013, and the first international link was activated in March 2014. By June 2014, Al Jazeera was asking whether Facebook was amplifying hate speech against the Rohingya.

On a lighter note, I'd hate to see all the Pikangikum teenagers hooked on video games. The introduction of the Internet will have unanticipated side-effects on the community and the individuals in it. It's an opportunity for a 21st-century Margaret Mead to live among the Pikangikum and other communities to observe the changes.

Written by Larry Press, Professor of Information Systems at California State University | 17-Feb-2021 00:10

SolarWinds Cyber Intel Analysis Part 2: A Look at Additional CISA-Published IoCs

A few weeks back, we added unpublicized artifacts to the list of indicators of compromise (IoCs) published by both FireEye and Open Source Context back in December 2020. Some would have thought that would put a stop to the havoc the SolarWinds threat actors have been wreaking, but the group targeted Malwarebytes just recently according to a company report.

As before, this post seeks to expand the list of published IoCs using a variety of domain and IP intelligence tools.

Additional IoCs According to CISA

Apart from some IoCs publicized in December 2020, the Cybersecurity & Infrastructure Security Agency (CISA) published the following additional data points, among others, on 6 January 2021:

  • Three newly discovered domains (i.e., ervsystem[.]com, infinitysoftwares[.]com, and mobilnweb[.]com)
  • Two IP addresses that two of the newly discovered domains resolved to (i.e., 107[.]152[.]35[.]77 — infinitysoftwares[.]com and 198[.]12[.]75[.]112 — ervsystem[.]com)
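Note that indicators in this post are written "defanged" (with [.] in place of .) so they cannot be clicked or resolved accidentally. Before running the lookups described below, they need to be "refanged". A minimal helper, not tied to any particular tool:

```python
def refang(ioc: str) -> str:
    """Convert a defanged indicator such as 'ervsystem[.]com' or
    '198[.]12[.]75[.]112' back to its plain form for lookups."""
    return ioc.replace("[.]", ".")

def defang(ioc: str) -> str:
    """Defang a domain or IP so it cannot be resolved or clicked
    accidentally when shared in a report."""
    return ioc.replace(".", "[.]")

# The three domains CISA added, as published (defanged)
iocs = ["ervsystem[.]com", "infinitysoftwares[.]com", "mobilnweb[.]com"]
plain = [refang(i) for i in iocs]  # ready for WHOIS/DNS queries
```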
What We Discovered about the Additional IoCs

Additional Domain List Expansion

We subjected the three domain additions to WHOIS lookups (including WHOIS history) and found that:

  • The domains were, like the previously publicized ones, relatively aged, ranging from one to three years old. Ervsystem[.]com was registered on 4 February 2018, infinitysoftwares[.]com on 28 January 2019, and mobilnweb[.]com on 28 September 2019. This strategy could be part of the attackers' attempt to evade the usual security protocol of blocking access to and from newly registered domains (NRDs).
  • All three domains' WHOIS records are privacy-protected, albeit by varying organizations (e.g., ervsystem[.]com by Anonymize, Inc. and mobilnweb[.]com by WhoisGuard, Inc.). Since the domains were not new, they all had previous owners, some of whom were named, although they are probably domainers, given the number of domains the registrants owned.
  • The three domains each had a different registrar (i.e., ervsystem[.]com — Epik, Inc., infinitysoftwares[.]com — NameSilo LLC, and mobilnweb[.]com — Namecheap, Inc.).
  • Two of the three domains (i.e., ervsystem[.]com and infinitysoftwares[.]com) were registered in the U.S. while mobilnweb[.]com was registered in Panama.
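The age observation above translates into a simple check. Here is a sketch of how an NRD policy might evaluate these domains, assuming a 90-day window (the cutoff is an assumption; real blocking policies vary):

```python
from datetime import date

NRD_WINDOW_DAYS = 90  # assumed cutoff; actual blocking policies vary

def is_nrd(created: date, observed: date) -> bool:
    """True if the domain counts as newly registered at observation time."""
    return (observed - created).days < NRD_WINDOW_DAYS

# Registration dates from the WHOIS lookups above
created = {
    "ervsystem.com": date(2018, 2, 4),
    "infinitysoftwares.com": date(2019, 1, 28),
    "mobilnweb.com": date(2019, 9, 28),
}
observed = date(2021, 1, 6)  # date of CISA's updated IoC list

# None of the three would trip an NRD filter, consistent with the
# evasion strategy described above
aged = {d: not is_nrd(c, observed) for d, c in created.items()}
```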

To see if additional artifacts could be added to the updated list of IoCs CISA published, we queried the newly added domains on DNS Lookup API. Our findings are listed below.

  • The three domains were connected to 11 IP addresses, nine of which are not included even in CISA's updated IoC list. These nine IP addresses are:
    • 85[.]17[.]31[.]82
    • 85[.]17[.]31[.]122
    • 178[.]162[.]203[.]202
    • 178[.]162[.]203[.]211
    • 178[.]162[.]203[.]226
    • 178[.]162[.]217[.]107
    • 5[.]79[.]71[.]205
    • 5[.]79[.]71[.]225
    • 172[.]97[.]71[.]162
  • Six of the nine additional IP addresses listed above are tagged "malicious" on VirusTotal. These are:
    • 85[.]17[.]31[.]82
    • 178[.]162[.]203[.]202
    • 178[.]162[.]203[.]226
    • 178[.]162[.]217[.]107
    • 5[.]79[.]71[.]205
    • 5[.]79[.]71[.]225

To find out more about the additional IP addresses we uncovered, we subjected them to reverse IP/DNS lookups, which revealed that:

  • All of the six malicious IP addresses cited above were connected to at least 300 domains each.
  • Interestingly, they all resolved to the malicious domain 0-0-0-0-0-0-0-0-0-0-0-0-0-10-0-0-0-0-0-0-0-0-0-0-0-0-0[.]info.
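The reverse IP pivot used above boils down to grouping domains by the IP addresses they resolve to and flagging IPs that host more than one suspect domain. A minimal sketch; the resolution data below is made up for illustration, and the real input would come from a reverse IP/DNS service:

```python
from collections import defaultdict

def pivot_by_ip(resolutions: dict) -> dict:
    """Invert {domain: [ips]} into {ip: set_of_domains}."""
    by_ip = defaultdict(set)
    for domain, ips in resolutions.items():
        for ip in ips:
            by_ip[ip].add(domain)
    return dict(by_ip)

# Hypothetical resolutions, for illustration only
resolutions = {
    "suspect-a.info": ["85.17.31.82", "5.79.71.205"],
    "suspect-b.info": ["85.17.31.82"],
    "suspect-c.info": ["178.162.203.202"],
}

# IPs shared by more than one domain are pivot points worth scrutiny
shared = {ip: doms for ip, doms in pivot_by_ip(resolutions).items()
          if len(doms) > 1}
```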

As this post has shown, further scrutiny of IoCs using domain and IP/DNS intelligence tools can uncover more artifacts. That said, organizations need not stop at adding publicized IoCs to their blocklists and can strive to cover as many potential additional attack vectors as possible.

If you're a security researcher, architect, or product developer working toward making the world safe from threats, contact us if you want to know more about the artifacts mentioned in this post or just want to collaborate with us for any security research initiative. | 16-Feb-2021 18:18

Google Set to Survive in Oz, but Far Bigger Threats Are on the Way

The signs are that the Australian Government and Google are close to a compromise. The Government's main demands stay in place, but some of the details will be changed. This allows the Government to claim victory, while the damage to Google will be limited. See also my earlier blog on this topic: Google vs. Australia — the war is on.

Publishers will, in one way or another, be paid for news. One option is a payment based on the value of the news and the value of the Google search facility, with an arbiter in the middle to come up with fair arrangements.

The other option will be for publishers to use Google's News Showcase based on a partnership between Google and publishers and get paid that way. Several Australian publishers have already signed up for this service.

It was interesting to see the tactics used in this power play. Microsoft became involved at the highest levels, indicating that it was more than willing and able to offer an alternative with its Bing search engine. If it had come to that, this would obviously have been one of the few alternatives for Australia.

But let us be honest: it is not for nothing that Bing has only a 4 percent market share, while Google has 90 percent. Whichever way you look at it, Google is by far the superior product. It would not have been a good outcome if Google were no longer available to Australian users.

Furthermore, it would be extremely destructive for tens of thousands of Australian businesses who, to a large extent, depend on Google for advertising.

While all of this is positive, it does not take away the broader issues of monopolies such as Facebook and Google. It has already become clear that it will not be enough to just regulate the existing giants. We need to look at the underlying element: their platforms.

The competition regulator in Australia, the ACCC, has already flagged the dominance of Google in the advertising market, and now we are getting more to the core of the problem. While most will look at Google as a search engine company, it is equally an advertising company. This is where it makes its money.

As we have seen in recent years, the advertising principle of maximizing clicks is often in conflict with social values. The algorithms favor fake news, conspiracy theories and criminal intent, as this increases exposure for advertising.

These algorithms also create echo chambers often used only to emphasize the fake news and conspiracies while gathering more market exposure for Google and its advertisers.

Using search, maps, YouTube and so on, Google gathers massive amounts of personal data from its users to make its advertising product more effective.

The current business models of the digital companies are not only undermining competition but also eroding privacy and many of the core issues of our democratic institutions.

The digital giants are predominantly American businesses. They are totally driven by profit and thus have been able to abuse their market position in relation to privacy and competition matters. By doing so, they now have market capitalizations that are totally out of any economic proportion. This might be a fleeting achievement, but in the long-term, it is unsustainable.

The power they have gained also allows them to dictate the supply line, extracting "rent" from their suppliers, who have little choice but to adhere to the terms and conditions of these platforms.

Amazon, Uber and Airbnb are all under investigation somewhere in the world for such practices.

Obviously, these platforms are immensely useful for our societies and economies, so we do not want to get rid of them. The importance of these platforms has become very clear during the pandemic, as they have become essential national and international infrastructure.

As the Europeans argue, the platforms eventually will have to become neutral on which a range of business models can be built independently and in competition with one another. These neutral platforms can be used by the digital giants, other businesses, as well as by governments, communities and so on.

Unraveling these complex issues and building new principles around platforms could easily take a decade or more, so we better get on with that job. We might need to look at a structural separation between the platforms and the services provided on top of them.

Social media companies are very much aware of the political pressure that is building up around the world. This is a real threat to their business models. They are working on internal systems to address these issues. However, it is unlikely that they are prepared to go deep enough to keep the regulators away.

Both in the U.S. and in Europe, serious plans are underway to curb the power of these companies. The new U.S. Government is proposing changes to section 230 of the Communications Decency Act of 1996.

Section 230 currently shields internet platforms from liability for content posted by their users. The proposal is to change that and make digital platforms responsible for harmful and criminal content.

Beyond that, addressing the platform issues in America will require an overhaul of the anti-trust regime, as the current regulatory system is unfit for today's digital environment.

The OECD is another organization focusing on these companies, in relation to the valuation of digital innovations and taxation.

While Google in Australia might be off the hook, for now, there are many more international actions planned to address these issues.

As the Australian case shows, these digital giants are now big enough to challenge and even threaten sovereign governments. The longer we wait to rein in these powers, the more governments can be intimidated by these companies, and the more difficult it will become for governments to come up with decisive action.

On the other side, all these actions by governments and regulators should be a warning sign for those digital moguls. They are accountable to their shareholders. Those shareholders could lose significant amounts of money if these companies are put under stringent regulations.

Perhaps a better outcome for them would be to accept the social and economic responsibilities of their businesses and to change their business models to better reflect their role in our society and our economy.

Written by Paul Budde, Managing Director of Paul Budde Communication | 16-Feb-2021 18:03

EU Rulings on Geo-Blocking in Digital Storefronts Will Increase Piracy Rates in the Developing World

For the longest time, it was an insurmountable challenge for those in the developing world to be able to afford to legally consume multimedia products. Prices originally set in Dollars, Euros or Yen often received insufficient adjustments to compensate for lower incomes, something that was compounded by local import or manufacture taxes that did little to alleviate matters. Markets starving to consume modern-day products were faced with an unrealistic pricing structure that was a bad fit for most citizens, especially for young people growing up without access to much capital in an increasingly digital world.

In the 1990s and 2000s, piracy thrived in regions such as LAC, CIS, Eastern Europe, and Southeast Asia, among several others. In countries such as Brazil, it was often more common to pirate than to purchase official goods such as music, films, software, and video games. City centers were (and to a smaller extent still are) overtaken by street vendors selling piles of CDs and DVDs with photocopied covers, housed in plastic envelopes and sold at a fraction of the cost found in legitimate stores.

The industry was, by and large, occupied waging intellectual property battles against developed world university students and upholding advantageous sales models such as that of the music CD, never quite looking into the issues of the developing world with a special lens. Solutions in developing countries were mostly focused on extensive police action against those selling the pirated goods, which amounted to little in the way of results, seeing as the demand side of the equation remained unaffected.

In the 2010s, developing markets observed a significant shift towards the consumption of legal goods as the young people grew into productive adults with access to more capital, and two factors can be seen as decisive in this shift: first, digital storefronts with price differentiation for countries with lesser purchasing power were established; second, streaming platforms offering flat fees for a relatively extensive catalogue of multimedia became available. This provided the correct incentives for many consumers not to want to be bothered by the hurdles of consuming pirated content.

In early 2021, the European Commission made a decision that is a subversion of proven best practices that convert informal consumers into legitimate purchasers, and if this act is a sign of their broader intentions, the only possible result is the increase in piracy rates in the developing world. Steam (the largest digital storefront for video games globally) and five major game publishers were fined to the tune of 7.8 million Euros for alleged "geo-blocking" practices. According to the official press release: "The Commission has concluded that the illegal practices of Valve [Steam's parent company] and the five publishers partitioned the EEA market in violation of EU antitrust rules."

In the same press release, the complaint is said to be rooted in: "bilateral agreements and/or concerted practices between Valve and each of the five PC video game publisher implemented by means of geo-blocked Steam activation keys which prevented the activation of certain of these publishers' PC video games outside Czechia, Poland, Hungary, Romania, Slovakia, Estonia, Latvia and Lithuania."

In other words, price differentiation meant to account for different market realities has been equated to geo-blocking, and thus deemed illegal within the EU. There are several reasons why this argument is inconsistent, but foremost is the stretching of the meaning of geo-blocking, which is normally understood to be the practice of stopping a user from being able to view specific content if they are accessing a certain service from a given region or their account is tied to a given region.

In this case, users from different regions are able to see the same products in the storefront, but if the user is purchasing the game from Poland, they will be offered a significantly reduced price in relation to a purchase originating from Germany. Since Steam's activation keys can be gifted and traded between users, this system ensures that a key bought within a lower-priced region cannot be activated in one where the product costs more. While this method is not fool-proof, it works well enough for publishers to adjust prices in a way that makes sense for their audience. This is a widespread tactic adopted by almost all publishers on the Steam storefront, not limited to the five that were cherry-picked by the EC to serve as examples.
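The key-activation scheme described above can be modeled as a region check at redemption time. This is an illustrative sketch only: Steam's actual implementation is not public, and the region codes and prices below are assumptions:

```python
# Hypothetical regional price table in EUR; values are illustrative only
REGION_PRICES = {"PL": 30, "DE": 60}

def price_for(region: str) -> int:
    """Look up the storefront price shown to a buyer in a region."""
    return REGION_PRICES[region]

def can_activate(key_region: str, account_region: str) -> bool:
    """A region-locked key redeems only on an account in its own region,
    closing the arbitrage between low- and high-priced regions."""
    return key_region == account_region
```

Under this model, a key bought at the lower Polish price cannot be redeemed on a German account, which is precisely the behavior the EC ruling treats as illegal geo-blocking.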

There is a clear correlation between the regions cited as having lower prices and reduced local purchasing power. According to the World Bank, GDP per capita (PPP, international dollars) in Hungary, Poland, Romania, Slovakia, and Latvia is within the 32,000-35,000 range, compared with Germany's 56,000 or Ireland's 88,000. Forcing the games to be sold at a flat price across the EU fails to acknowledge these disparities. It does not promote fairness, as the bloc does not have a homogeneous economic reality.

In the aftermath of the ruling, a statement from the EC's Executive Vice-President Margrethe Vestager, in charge of competition policy, is of particular note: "The videogame industry in Europe is thriving, and it is now worth over € 17 billion. Today's sanctions against the 'geo-blocking' practices of Valve and five PC video game publishers serve as a reminder that under EU competition law, companies are prohibited from contractually restricting cross-border sales. Such practices deprive European consumers of the benefits of the EU Digital Single Market and of the opportunity to shop around for the most suitable offer in the EU."

The question that follows from Vestager's statement is: shop around how? Users can go to a different storefront altogether if they so desire, but how is this supposed to take place within the system of a single storefront in a manner that, as described by Vestager, would create more consumer choice? That proposition does not add up. What is being forced into practice is that users in the developing world will need to pay the same price for digital goods as those in the developed world, without sensitivity to their local reality. This is a significant step back from decades of progress in terms of creating a fair market where users can make legitimate purchases.

With the EU's growing appetite for the regulation of digital goods and several new regulations on the horizon, it is important to observe this event as an example of what is to come. In the name of standardizing laws within the bloc, regions with less purchasing power will more often be forced to choose between going without unreasonably priced goods and pirating that content. Many will choose the piracy route, and no amount of digital rights management will be able to stop it. Hopefully, other nations and blocs will not follow the EU's example, or we might be headed yet again for very difficult times in the commerce of digital goods.

Written by Mark Datysgeld, GNSO Councilor at ICANN | 16-Feb-2021 17:44

Why Fiber?

As much as I've written about broadband and broadband technology, it struck me that I have never written a concise answer to the question, "Why fiber?". Somebody asked me recently, and I realized I'd never addressed it directly. If you're going to build broadband and have a choice of technologies, why is fiber the best choice?

Future-proofed. This is a word that gets tossed around the broadband industry all the time, to the point that most people don't stop to think about what it means. The demand for broadband has been growing at a blistering pace. At the end of the third quarter of 2020, the average US home used 384 gigabytes of data per month, up from 218 gigabytes per household per month just two years earlier. That is a mind-bogglingly large amount of data, and most people have a hard time grasping the implications of fast growth over long periods of time. Even fiber network engineers often underestimate future demand because the growth feels unrealistic.

As a useful exercise, I invite readers to plot out that growth at a 21% pace per year — the rate that broadband has been growing since the early 1980s. The amount of bandwidth that we're likely to use ten, twenty, and fifty years from now will dwarf today's usage.
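To make the exercise concrete, here is a short Python sketch of the compound-growth math. The 384 GB starting point and the 21% annual growth rate come from the article; the projection horizons are the ones the author suggests plotting.

```python
def projected_usage(base_gb: float, annual_growth: float, years: int) -> float:
    """Compound monthly household usage forward at a fixed annual growth rate."""
    return base_gb * (1 + annual_growth) ** years

# Q3 2020 average US household usage, growing at 21% per year.
for years in (10, 20, 50):
    gb = projected_usage(384, 0.21, years)
    print(f"In {years} years: {gb:,.0f} GB/month")
```

At that rate, today's 384 GB becomes roughly 2.6 TB per month in ten years and more than 5 PB per month in fifty, which is what "future-proofed" has to accommodate.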

Fiber is the only technology that can handle broadband demand today and for the next fifty years. You can already buy next-generation PON equipment that can deliver a symmetrical 10 Gbps data stream to a home or business. The generation already under beta test will deliver a symmetrical 40 Gbps, and the one after that is likely to be 80 Gbps or 100 Gbps. The only close competitor to fiber is a cable company coaxial network, and the only way to future-proof those networks would be to ditch the bandwidth used for TV, which is the majority of the bandwidth on a cable network. Even if cable companies are willing to ditch TV, the copper coaxial networks are already approaching the end of their economic life. While there has been talk of gigabit wireless to residences (which I'll believe when I see it), nobody has ever talked about 10-gigabit wireless.

Fiber Has Solved the Upload Problem. Anybody working or schooling from home now needs fast and reliable upload broadband. Fiber is the only technology that solves those upload needs today. Wireless can be configured for faster uploads, but doing so sacrifices download speed. Cable networks will only be able to offer symmetrical broadband after an expensive upgrade using technology that won't be available for at least three years. The industry consensus is that cable companies will be loath to upgrade unless forced to by competition.

Fiber Is the Easiest to Operate. Fiber networks are the easiest to operate since they transmit light instead of radio waves. Cable company and telco copper networks act like giant antennas that pick up interference. Interference from other wireless providers or from natural phenomena is the predominant challenge of wireless technologies.

A fiber network means fewer trouble calls, fewer truck rolls, and lower labor costs. It's far faster to troubleshoot problems in fiber networks. Fiber cables are also surprisingly strong, and fiber is often the only wire still functioning after a hurricane or ice storm.

Lower Life-Cycle Costs. Fiber is clearly expensive to build, but its cost characteristics over a fifty-year time frame can make it the lowest-cost long-term option. Nobody knows how long fiber will last, but fiber manufactured today is far superior to fiber made a few decades ago. When fiber is installed carefully and treated well, it might well last for most of a century. Fiber electronics will likely have to be upgraded every 10-12 years, but manufacturers design upgrades so that older customer devices can remain in service afterward. When replacement costs and ongoing maintenance expenses are considered, fiber might be the lowest-cost technology over long time frames.
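The life-cycle argument can be sketched as simple arithmetic. All dollar figures below are hypothetical placeholders for illustration, not numbers from the article; the 50-year horizon and the 10-12-year electronics refresh cadence are the article's.

```python
def lifecycle_cost(build: float, annual_maintenance: float,
                   electronics_cost: float, refresh_years: int,
                   horizon_years: int = 50) -> float:
    """Total cost of ownership: initial build, yearly maintenance,
    and periodic electronics refreshes (initial electronics are in build)."""
    refreshes = horizon_years // refresh_years
    return build + annual_maintenance * horizon_years + electronics_cost * refreshes

# Hypothetical per-home numbers: fiber costs more to build but less to run.
fiber = lifecycle_cost(build=1500, annual_maintenance=20,
                       electronics_cost=150, refresh_years=12)
coax = lifecycle_cost(build=700, annual_maintenance=60,
                      electronics_cost=120, refresh_years=8)
print(f"Fiber: ${fiber:,.0f}  Coax: ${coax:,.0f}")
```

With these placeholder inputs, the cheaper-to-build network ends up costing more over fifty years, which is the shape of the argument being made.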

Written by Doug Dawson, President at CCG Consulting | 13-Feb-2021 21:07

How to Monitor IP Netblocks for Possible Targeted Attacks

A couple of weeks back, a security researcher alerted his LinkedIn contacts to possibly ongoing targeted attacks stemming from the Iranian subnet 194[.]147[.]140[.]x. He advised cybersecurity specialists to watch out for potentially threatening subnets and to consider blocking them. That post prompted us to look into the subnets; this article details our findings using IP Netblocks WHOIS Database.

Analysis and Findings

As a first step, we downloaded the daily IP netblocks WHOIS data feeds for the three days (10–12 January 2021) leading up to the time the post was shared. The goal? To see if IP addresses included in the netblock were being tagged as malicious on the open-source blocklist AbuseIPDB, which could be indicative of an ongoing campaign.

The IP netblocks WHOIS data feed for 10 January showed that the IP addresses in 194[.]145[.]156[.]0/24 (among others) had been modified on January 8. The screenshot below contains an overview of the change:

The feed for 11 January, meanwhile, did not have entries pertaining to the IP netblock in question. The feed for 12 January, however, showed various changes made within the netblock, as shown by the following screenshots:

  • A change to 194[.]147[.]140[.]0/24 that took place months before, on October 13, 2020.
  • A change to 194[.]147[.]140[.]0/24 that had taken place on January 8, 2021.
  • Another change to 194[.]147[.]140[.]0/24 that also took place on October 13, 2020.

While these modifications may not necessarily have anything to do with attacks or malicious activity (especially since some changes happened several months earlier), it is advisable to double-check and dig deeper for the sake of security.

AbuseIPDB Search Results

Keying in the IP addresses on AbuseIPDB allowed us to determine that the following were tagged as malicious for the reasons indicated:

  • 194[.]147[.]140[.]2: Reported 82 times for port scanning, brute-forcing, and hacking.
  • 194[.]147[.]140[.]3: Reported 87 times for port scanning and hacking.
  • 194[.]147[.]140[.]4: Reported 172 times for distributed denial-of-service (DDoS) attacks, host exploitation, port scanning, hacking, spoofing, brute-forcing, SQL injection attacks, web app exploitation, and File Transfer Protocol (FTP) brute-forcing.
  • 194[.]147[.]140[.]5: Reported 274 times for DDoS attacks, host exploitation, port scanning, brute-forcing, and hacking.
  • 194[.]147[.]140[.]6: Reported 96 times for port scanning, brute-forcing, and hacking.
  • 194[.]147[.]140[.]7: Reported 156 times for DDoS attacks, host exploitation, port scanning, hacking, SQL injection attacks, brute-forcing, and web app exploitation.
  • 194[.]147[.]140[.]8: Reported 144 times for DDoS attacks, host exploitation, hacking, brute-forcing, port scanning, email spamming, FTP brute-forcing, SQL injection attacks, and web app exploitation.
  • 194[.]147[.]140[.]9: Reported one time for port scanning.
  • 194[.]147[.]140[.]12: Reported 17 times for port scanning, hacking, and brute-forcing.
  • 194[.]147[.]140[.]13: Reported 18 times for port scanning, hacking, and brute-forcing.
  • 194[.]147[.]140[.]14: Reported 17 times for port scanning and brute-forcing.
  • 194[.]147[.]140[.]15: Reported 15 times for port scanning and brute-forcing.
  • 194[.]147[.]140[.]16: Reported 16 times for port scanning and brute-forcing.
  • 194[.]147[.]140[.]17: Reported 118 times for port scanning, hacking, host exploitation, DDoS attacks, phishing, web app exploitation, bot communication, and brute-forcing.
  • 194[.]147[.]140[.]18: Reported 100 times for port scanning, hacking, host exploitation, web app exploitation, bot communication, and brute-forcing.
  • 194[.]147[.]140[.]19: Reported 98 times for port scanning, hacking, host exploitation, web app exploitation, bot communication, and brute-forcing.
  • 194[.]147[.]140[.]20: Reported 732 times for port scanning, hacking, host exploitation, brute-forcing, DDoS attacks, phishing, and web app exploitation.
  • 194[.]147[.]140[.]21: Reported 664 times for port scanning, brute-forcing, hacking, host exploitation, and bot communication.
  • 194[.]147[.]140[.]22: Reported 792 times for port scanning, brute-forcing, web app exploitation, hacking, and host exploitation.
  • 194[.]147[.]140[.]23: Reported 732 times for port scanning, hacking, brute-forcing, and host exploitation.
  • 194[.]147[.]140[.]24: Reported 720 times for port scanning, brute-forcing, hacking, web app exploitation, and host exploitation.

Organizations that are hesitant to block an entire IP netblock can instead block the smaller subnet indicated in IP Netblocks WHOIS Database (i.e., 194[.]147[.]140[.]0/24) or the specific IP addresses listed above that have been confirmed malicious. Priority should be given to those reported hundreds of times, namely, 194[.]147[.]140[.]5, 194[.]147[.]140[.]20, 194[.]147[.]140[.]21, 194[.]147[.]140[.]22, 194[.]147[.]140[.]23, and 194[.]147[.]140[.]24.
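For teams taking the narrower approach, Python's standard ipaddress module makes it straightforward to check a source address against the /24 and the priority list before writing firewall rules. This is a minimal sketch; the addresses are the ones reported above, and the decision labels are illustrative.

```python
import ipaddress

SUSPECT_NET = ipaddress.ip_network("194.147.140.0/24")
PRIORITY_IPS = {  # addresses reported hundreds of times on AbuseIPDB
    "194.147.140.5", "194.147.140.20", "194.147.140.21",
    "194.147.140.22", "194.147.140.23", "194.147.140.24",
}

def classify(ip: str) -> str:
    """Return a blocking decision for a single source address."""
    addr = ipaddress.ip_address(ip)
    if ip in PRIORITY_IPS:
        return "block (priority)"
    if addr in SUSPECT_NET:
        return "block (subnet)"
    return "allow"

print(classify("194.147.140.20"))  # block (priority)
print(classify("194.147.140.99"))  # block (subnet)
print(classify("8.8.8.8"))         # allow
```

The same membership test scales to whole feeds: iterate over observed source IPs and log or drop anything that doesn't return "allow".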

Note that port scanning, brute-forcing, and hacking, among others, are typical means cyber attackers use to get into target networks. Once in, they can siphon confidential data and send this to servers under their control.

In the featured analysis, IP Netblocks WHOIS Database proved useful in limiting the scope of IP-level blocking, which many organizations hesitate to employ when addressing threats. Instead of blocking an entire IP netblock, security professionals can identify a small subset to dig deeper into.

Want to know more about the artifacts identified in this post for your own research? Or are you thinking of collaborating with our threat researchers? Contact us for partnership opportunities. | 13-Feb-2021 20:49

The 3 Scariest Attacks That Leaked the Personally Identifiable Information (PII) of Millions of Users

Cybercriminals are increasingly targeting personally identifiable information (PII). The reason: "data is the new gold" in this digital world, and the more sensitive data is, the more value it has. Little is more sensitive than personally identifiable information, because it contains enough information to identify you digitally.

Examples of personally identifiable information include name, email, contact number, address, social security number, tax file number, banking or financial information, and more such data that helps identify you.

Over the last decade, personally identifiable information has become a prime concern for businesses and individuals alike, raising legal and ethical questions. On top of that, poor information security raises concerns over compromised user credentials. With the growing digitization of nearly everything in our lives, we are storing more information digitally than ever, and yet the danger is often underestimated by both businesses and individuals. That is why cybercriminals increasingly aim to leak or steal netizens' personally identifiable information. It is no surprise that many attacks in the last decade have together leaked hundreds of millions, if not billions, of records. Let's review the worst of them: their attack vectors, entry points, and prevention lessons.

Equifax

Equifax, one of the largest credit reporting agencies in the US, suffered a breach in 2017. Shockingly, attackers succeeded in stealing hundreds of millions of records about its customers. The breached data included names, addresses, social security numbers, driver's license numbers, and more. Moreover, about 200,000 of those records also included credit card numbers. In all, the breach affected almost 143 million people, more than 40 percent of the population of the United States, making it the worst data leak in history.

However, this data never surfaced on the dark web, giving rise to the theory that the attack was sponsored by a state-backed hacker group in China for espionage purposes. After the attack, Equifax invested $1.4 billion to upgrade its security infrastructure; if only it had done so before. It all began in March 2017, when a vulnerability, CVE-2017-5638, was discovered in Apache Struts, a popular development framework used by Equifax. On March 7, a patch for the bug was released, and on March 9, Equifax admins were told to apply it to their systems. They failed to do so. Equifax was hacked on March 10, 2017, but the attackers sat silent for almost two months; on May 13, 2017, they started compromising and exfiltrating data from other parts of the network. The attackers were smart enough to encrypt the data before moving it out of the network, while Equifax had neglected to renew the certificate used to inspect encrypted internal network traffic. After letting it lapse for almost ten months, Equifax renewed the certificate on July 29, 2017, and only then learned about the attack. In short, Equifax made multiple basic mistakes, which led to the biggest data leak known in history.

Starwood (Marriott)

Starwood Hotels and Resorts, now owned by Marriott International, became the talk of the town in late 2018 when it announced that it had been attacked, leaking hundreds of millions of customer records. The attack came to light on September 8, 2018, when an internal security tool flagged a suspicious attempt to access Starwood's internal guest reservation systems. Marriott took the flag seriously and performed an internal investigation, finding that Starwood had been compromised sometime in 2014. Marriott bought the company in 2016 but had not migrated Starwood's original systems onto its own even after two years. The result: cybercriminals were able to extract almost 500 million guest records by November 2018, making this the second-worst attack in history to leak personally identifiable information.

Investigators found a Remote Access Trojan (RAT) inside Starwood's systems, along with Mimikatz, a user credential sniffer. It is believed that these two tools gave the attackers administrator access, which they then used to reach internal networks and eventually the secure systems holding customer and guest records. Shockingly, the leaked data included names, email addresses, phone numbers, and other sensitive information like credit card and passport numbers, with disastrous consequences for those affected. Like Equifax, Starwood (or Marriott) made multiple mistakes that led to this big data leak. For instance, it encrypted sensitive data like credit card numbers but kept the encryption key on the same server, making it utterly easy for the attackers to decrypt the stolen data. Some passport numbers were also saved in plain text, while the industry norm is to encrypt all personally identifiable information.

eBay

eBay, one of the biggest online e-commerce platforms, was attacked in 2014, and the attack leaked 145 million user records. Fortunately, eBay found no evidence of unauthorized access to credit card or financial information at PayPal, its payments subsidiary and the most popular method of sending and receiving money on the Internet. Still, security experts warned users to stay alert since the attackers had both email addresses and passwords: if they managed to decrypt the passwords, they could try logging in to other sites with the leaked credentials and do far more damage.

The leaked data contained email addresses, birth dates, encrypted passwords, mailing addresses, and more personal information of its users. "Exposure of personal information such as postal addresses and dates of birth puts users at risk of identity theft, where the data is used to claim ownership of both online and real-world identities. Users are also at risk of phishing attacks from malicious third-parties, which use the private details to trick people into handing over a bank account, credit card or other sensitive information," according to The Guardian.
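The eBay episode illustrates why modern guidance favors salted one-way hashing over reversible encryption for stored passwords: there is no key to steal and nothing to decrypt. Here is a minimal sketch using Python's standard library; the iteration count is illustrative, not any company's actual scheme, and production systems should follow current OWASP guidance.

```python
import hashlib
import hmac
import os
from typing import Optional

ITERATIONS = 100_000  # illustrative; tune per current OWASP guidance

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple:
    """Derive a one-way PBKDF2 hash; a unique per-user salt defeats
    precomputed (rainbow-table) attacks."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because only the salt and digest are stored, a database leak exposes nothing an attacker can reverse cheaply, unlike the decryptable passwords in the breach described above.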

All the cyberattacks listed above share some common issues. First, the organizations were not taking cybersecurity seriously. Period. Second, they were not keeping their systems up to date. And last but not least, they were slow to detect intrusions and to take responsibility for their failures.

Written by Evan Morris, Network Security Manager | 12-Feb-2021 17:01

Brand Abuse is Systemic: The Role of Networks in Brand Abuse

The 2020 COVID pandemic forced businesses to double down on their digital investments as in-person moved online. Brands refined and upped their marketing investments across digital channels — email, websites, social media, apps, and advertising — to serve their customers along their digital buying journeys. And while we were all focusing on managing life in the face of these shifts, malicious actors demonstrated once again that they are quick studies as they increased their investment in these same digital channels to promote their illicit activities on the strength of famous brands. 

These highly-organized bad actors modeled their abuse networks by infringing upon the trademarks, brand names, hashtags, slogans, and even web page text used by the brands they target. To weave their schemes, they used the same promotional vehicles and digital assets as the targeted brands we all love. They built websites and apps, registered related URLs, social profiles, and pages to promote themselves. And in the case of counterfeit and grey-market goods, they created seller profiles and marketplace listings. 

These sophisticated abuse networks appear 'friendly' to their victims, often including customer service chat and bots. Bad actors even take advantage of the latest advances in Martech (marketing technology) to conduct powerful customer — or more accurately, victim — acquisition strategies, using tracking pixels on their web pages and 'retargeting' ads across the web to reinforce their messages. 

In short, economically-motivated bad actors apply modern digital marketing strategies and technology to conduct systemic brand abuse and take advantage of consumers using the most loved brands as bait. 

To understand the scope of systemic brand abuse, the Appdetex team used our patent-pending Appdetex Tracer™ technology to profile these highly-organized networks of bad actors.

How big is the problem?

On average: 

  • Abuse is systemic. 25% of abuse is part of a systemic abuse network.
  • Abuse networks grow significantly. Over a six-month period, unabated abuse network connections, or Traces, increased by 30%, while the number of network nodes grew by nearly 10X.
  • App and domain nodes are the most likely connection points. 47% of confirmed abusive apps have related nodes, and 27% of abusive domain names are related to other abuse.
  • Content and social media provide important indicators of abuse. 14% of websites that did not include infringement in the domain name were related to malicious activity, while 9% of abusive social handles and profiles were related to additional instances of abuse.

We examined a variety of industries, too, and found that, while no sector was immune, the highest incidence of abuse occurred in gaming, music and media, social networks, and, surprisingly, transportation industries. With the rise of COVID, consumers have increasingly relied upon delivery services, making the transportation category a larger target for bad actors.

What does this mean for brand enforcement professionals?

The advent of domain name privacy and proxy services, as well as privacy regulations and technologies, has had the unintended consequence of enabling bad actors to hide their identities. As malicious actors learned how they were being tracked, they began using multiple obfuscation layers to make their operations harder to find and dismantle.

To solve this problem, we developed Appdetex Tracer. This patent-pending investigative technology uses advanced crawling, scraping, and automated analytics tools to find the unseen digital links between sites, ads, listings, social networking handles, and apps used by these sophisticated networks of bad actors. These links, or Traces, can be as simple as publicly available domain registration data or as subtle as the security, customer service, or marketing technology providing the criminal network's infrastructure.

Traces link the individual elements of a brand abuse gambit, whether those elements are a marketplace listing, a website, a social media profile, an ad, or an app.  We refer to these elements as nodes.

Why is knowing which abuse is part of a network important?

Traditional brand protection technologies rely on scanning to identify the nodes of a network, and they exclude the linkages and discovery of related abuse. As a result, brand protection professionals who use these legacy technologies are able to pursue enforcement against individual nodes but are unable to identify related abuse, leaving large swaths of criminal operations in place to continue profiting from the targeted brand. A considerable proportion of these unidentified nodes are part of a criminal network, and these networks are often very damaging to brands over long periods.

How to Combat Systemic Abuse
  1. Use automated investigation solutions to uncover large criminal digital networks. Understanding the traces, finding the nodes of a network, mapping the criminal activity, using offline sources to augment traces and connections, identifying the malicious individual or organization, and developing a strategy for stopping organized actors is an essential methodology for brand protection professionals to defend brands.
  2. Employ an abuse map and a strategy to stop organized malicious actors. If you know your adversary, you can employ advanced techniques to disable malicious actors and dismantle their networks. These advanced techniques range from stopping payment processing capabilities to initiating a mass UDRP or even litigating. These activities have a higher chance of a successful outcome with the intelligence provided by automated intelligence and investigation solutions that document bad-actor networks.
  3. Use individual enforcements to take down incidental abuse and more advanced means to stop malicious abuse networks. Often abuse networks rely on unbranded infrastructure to monetize their schemes. Their apps, checkout pages, marketplace listings, and social commerce sites may have no branding whatsoever. So, dismantling the promotional network incrementally, one ad, social profile, marketplace listing, or domain at a time, doesn't stop bad actors; it only slows them. The most sophisticated networks can absorb slowdowns and re-create their infrastructure reasonably quickly, returning to target your business.

Our survey findings are based on the confirmed abuse and resulting enforcement of more than 100,000 online infringement cases and instances of abuse over a one-year period. Among the industries included in the study were Music & Media, Transportation, Retail, Gaming, Communications, Healthcare, and others. 

We used our patent-pending Appdetex Tracer technology to investigate these instances of infringement.  With Appdetex Tracer, we were able to map the networks, document the connection points, or 'traces,' and identify additional network nodes. As a result, we were able to uncover additional related abuse networks targeting the industries in the study.

To find out more about modern brand protection, you can request a demo here. | 11-Feb-2021 23:13

5G a Fizzle With Consumers

The cellular companies have made an unprecedented push to get customers interested in 5G. Back in November, I recorded a college football game, which let me go back and count the twelve 5G commercials aired during the game. Advertising during sports events is relatively expensive, so these ads were not bought at bargain-basement prices. The amount of money being spent on advertising 5G must be gigantic.

It looks like all of that advertising is not having the impact the cellular companies want. J.D. Power conducted a series of large surveys near the end of 2020 and found that the public was less than enamored with 5G, even after all of the advertising.

In good news for the cellular carriers, the advertising has created awareness of 5G: 92% of those surveyed had heard of it. However, only 26% believe that 5G is faster than existing 4G cellphone broadband. The response the carriers will find troubling is that only 5% of those surveyed would pay more to get 5G, and only 4% are willing to switch cellular providers to get it.

The only companies making money on 5G are the cellphone manufacturers. Throughout the fall, all of the big cellphone makers put a big push on having 5G in their phones. However, that advertising might not be having the desired impact, and I've noticed that recent cellphone ads focus on the cameras in newer phones instead of 5G.

None of this is particularly surprising because the web is full of stories about how 5G speeds are disappointing. Numerous reporters have compared 4G and 5G coverage in the same locations and often found that 5G is slower than 4G.

But there is a more fundamental question that the cellular companies have never addressed with customers: why do customers need faster cellphone data speeds? The biggest bandwidth functions performed on most cellphones are watching video and playing games, and 4G data speeds are more than adequate for those needs. Cellphones don't suffer from having multiple users trying to share the bandwidth at the same time. Unlike with a home broadband connection, I can't recall hearing people complain that cellphone data speeds are too slow. People complain about coverage gaps where they can't get service, but there doesn't seem to be any groundswell asking for faster cellphone data.

Most people don't realize that the cellular companies have no choice in the way they are rolling out 5G. The 4G cellular networks are swamped and overloaded, and if the cellular companies didn't act, they were facing the collapse of the 4G network during busy hours.

The carriers have taken some of the stress off the 4G network by deploying small cell sites. But like with many other things, small cell site deployment slowed down during the pandemic. The carriers have introduced new spectrum bands, and that is what they are currently labeling as 5G. The real point of the 5G advertising is to lure people to buy and use phones that use the new spectrum bands, which reduces the pressure on the traditional cellular spectrum.

Eventually, the carriers will deploy real 5G, meaning gear built to the 5G specifications. That will complete the third leg of cellular improvements. Some of those 5G features will significantly improve cellular networks. For example, a single cell site will eventually be able to handle up to 100,000 simultaneous connections. Cellphones will not only connect to the nearest cell site but will be able to connect to other cell sites, even to more than one at the same time. These improvements are all aimed at urban cellular coverage and won't make much difference in rural markets.

For now, the only new 5G feature that has been deployed is dynamic spectrum sharing (DSS). This feature lets a carrier mix 4G and 5G customers in the same spectrum bands. This feature allows the cellular companies to shuttle customers away from the busy spectrum to relieve pressure on the network.

I don't know that we'll ever learn the extent to which these various efforts are helping the cellular networks. The cellular companies have been careful for years not to discuss the 4G crisis publicly, and they are unlikely to divulge the details of how they are fixing network problems now.

Written by Doug Dawson, President at CCG Consulting | 10-Feb-2021 20:07

Now We Know Why It's Hard to Get a .COM

As executive director of CALinnovates, an organization that advocates for innovation and startups, as well as a new business owner myself, I know how important a .COM domain name can be to a new company's online presence and marketing strategy. That's why I read with interest a new Boston Consulting Group report on how the .COM market is changing. Having read it, I now have a much better understanding of why new businesses find it hard to get relevant .COM domain names.

According to the report, domainers (speculators) are on the verge of becoming the biggest players in the .COM market. Here's what BCG said: "The net result is that, on a dollar basis, the secondary market, at $2.1B/year, is almost as big as the primary market, at $2.3B/year, and nearly double the size of the registry's wholesale revenue of $1.1B/year. In other words, nearly half of the dollars end-users spent buying new domains go to domainers."

That has broad implications for anyone trying to get a .COM domain.

First, it means that many currently registered .COM domains are locked up by domainers. That in itself distorts the market and makes it challenging for a startup, organization, or person to get a domain that closely matches their online business or purpose.

Second, the scarcity created by locking up so many domains warps prices. According to BCG, the typical domainer price ranges from $1,700 to $2,500, while the average registrar retail price is only $16.58. And the price of a .COM domain from the registry is only $7.85.
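The markup implied by those figures is easy to compute; the numbers below are taken from the paragraph above.

```python
registry_price = 7.85  # wholesale .COM price from the registry
retail_price = 16.58   # average registrar retail price
domainer_low, domainer_high = 1700, 2500  # typical secondary-market range

print(f"Retail markup over registry: {retail_price / registry_price:.1f}x")
print(f"Domainer markup over retail: "
      f"{domainer_low / retail_price:.0f}x to {domainer_high / retail_price:.0f}x")
```

In other words, a domain that costs under $17 at retail typically resells for roughly 100-150 times that amount on the secondary market.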

I found this out firsthand when searching for a domain name for a venture that I launched in 2020. The ideal domain was registered but not in use. If I were willing to pay GoDaddy $119, they would try to get it for me, but they calculated it would likely cost me $5,146. That's a lot of money for any startup to pay, but it is consumers who ultimately have to pay these costs.

Speaking as a person who advocates for technology policies that foster innovation and enable new businesses to emerge and flourish, the .COM market seems upside down: going forward, new businesses are much more likely to be forced to deal with domainers than with regular registrars.

And as a new business owner myself, it doesn't seem right that entrepreneurs like me will be diverted to a secondary market that charges 150-200 times the retail price (according to BCG) just to establish a preferred online presence.

I don't know what the answers are here, and I'd encourage anyone interested in the domain world to read the BCG report. But it does seem that, in the future, those looking for the right domain name are likely to spend more time digging and, ultimately, more money paying a domainer.

Written by Mike Montgomery, Executive Director at CALinnovates | 10-Feb-2021 20:00

Let's Not Forget About Solar Flares

As the world becomes more and more reliant on electronics, it's worth a periodic reminder that a large solar flare could knock out much of the electronics on earth. Such an event would be devastating to the Internet, satellite broadband, and the many electronics we use in daily life.

A solar flare is the result of a periodic ejection of matter from the sun into space. Scientists still aren't entirely sure what causes solar flares, but they know the phenomenon is somehow related to shifts in the sun's magnetic field. These big expulsions of the sun's matter discharge vast amounts of electromagnetic energy across a wide range of particles and spectrum. Solar flares are somewhat directional, and the earth receives the largest dose of radiation when a flare is aimed in our direction.

A solar flare can happen at any time, but peak solar activity follows an 11-year cycle, with the latest cycle having started in December 2019. Scientists can see a solar flare about eight minutes after it occurs, the time it takes the light to reach us. The particle radiation from a flare then hits the earth between 17 and 36 hours after the event.
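The timing in that paragraph follows from simple distance/speed arithmetic. The mean Sun-Earth distance and the speed of light are standard values; the particle speeds are back-calculated from the article's 17-36 hour arrival window.

```python
SUN_EARTH_KM = 149.6e6    # mean Sun-Earth distance
LIGHT_KM_S = 299_792.458  # speed of light

# The light from a flare (what scientists *see*) arrives in about 8 minutes.
light_minutes = SUN_EARTH_KM / LIGHT_KM_S / 60

# Particles arriving 17-36 hours later imply these average speeds:
fast_km_s = SUN_EARTH_KM / (17 * 3600)
slow_km_s = SUN_EARTH_KM / (36 * 3600)

print(f"Light travel time: {light_minutes:.1f} minutes")
print(f"Particle speed range: {slow_km_s:,.0f}-{fast_km_s:,.0f} km/s")
```

That gap between seeing a flare and feeling its particles is what gives grid and satellite operators hours of warning to power down vulnerable equipment.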

Solar flares cause damage when the radiation from a flare pierces the protection afforded by the atmosphere. Small flares barely make it to earth and don't cause much damage. But a large flare can pepper the earth's surface with radiation that can spread stray signals through electrical wiring and cause damage to the components of the electric grid and any devices connected to it. Solar flares are particularly damaging to objects in space and can destroy electronics in satellites and even cause them to fall out of orbit.

The earth has been hit by big solar flares in the past. The biggest flare that we know about happened in 1859 and blew up telegraph equipment around the world. That solar flare pushed the aurora borealis as far south as Hawaii. In 2012, the earth missed a similarly large flare by only about a week; the flare crossed the earth's orbital path just behind the planet. A smaller flare in 1989 knocked out electricity in Quebec for nine hours. During Halloween week of 2003, 17 small solar flares erupted in quick succession and caused a number of reported problems: airplanes were rerouted, satellites were powered down, and the aurora borealis could be seen as far south as Florida.

The impact of a big solar flare on the scale of the 1859 event would be devastating to modern electronics. NASA scientists estimated that a direct hit from the 2012 solar flare would have done over $2 trillion in damage to our electric grids.

Our reliance on electronics has skyrocketed since 2012. We now do a huge amount of our computing in the cloud. Our homes and businesses are full of electronic devices that could be damaged or ruined by a big solar flare. We communicate through hundreds of thousands of cellular sites. We are just in the process of delivering a lot of bandwidth from small satellites that could be destroyed or disabled by a big solar flare. Many routine daily events now rely on GPS satellites and weather satellites. We are moving manufacturing back to the US through robotized and automated factories. An event that would have caused $2 trillion in damages in 2012 would likely cost far more today, and even more in the future.

The purpose of this blog is not to cry wolf. But networks ought to have a plan for when a giant flare is announced. The only good way to protect against a giant flare is to unplug electronics and remove devices from the grid, something that's not easy to do in a modern network on less than 17 hours of notice. NASA estimates the probability of a big flare this decade at around 12%, which is large enough to worry about. And, inevitably, we'll eventually get hit by one. Solar flares are a natural phenomenon, just like hurricanes. Unlike hurricanes, though, we increase the theoretical damage from a big solar flare every time we make our lives more reliant on electronics. But like hurricanes, we can mitigate the worst of the damage if we react quickly when a big flare is on the way.

Written by Doug Dawson, President at CCG Consulting | 08-Feb-2021 21:08

Starlink Broadband Service – More on the Beta Plus Exciting Video

Starlink Dishy Installs Itself – One of the revolutionary advances in Starlink's satellite-based broadband service is that the dish, nicknamed Dishy, installs itself. This video shows a newly installed Dishy finding a satellite and then orienting itself for future service.

If you have last-generation satellite internet access, broadband from a wireless ISP (WISP), or even satellite television from DISH or DIRECTV, an installer came and carefully aimed a dish antenna for you. Starlink, a broadband access service from Elon Musk's SpaceX company, reimagines the install process and, in most cases, eliminates the need for an installer. The Starlink dish can sit on the ground or the peak of your roof; more importantly, it aims itself, as you see in the accompanying video.

BTW, the dish is heated to melt snow or evaporate rain — that's why mine has an icicle beard.

On reflection, it's not surprising that this dish does a robotic install: it talks to satellites launched by rockets that guide themselves to barge landings for reuse, and it's a cousin of the almost self-driving Tesla.

There is a kit for putting the dish on the ridgeline of your house, which uses bricks to weigh down legs draped over the ridgeline. If you can get to the ridgeline of your house, you don't have to make holes in the roof to put the dish there. Starlink has another kit for mounting the dish on a pole and a kit for sealing around a hole you may have to make in your house to get the combination power and data wire (power over ethernet) inside. I found a vent, so I didn't have to do that.

WiFi setup

Starlink comes with a vanilla WiFi router. You can set the name of the network and password from a smartphone app, but you can't do any sophisticated management of the router itself. If you do need more capability, you can plug whatever wireless router you have been using into the AUX port on the Starlink router, retain whatever management instructions you set up previously, and also continue to use any direct ethernet connections you made from your old router. I have a fairly sophisticated ORBI set up to reach the corners of my house but just had to plug the ORBI base router into the Starlink router. The base and satellite ORBI routers continued to function as usual.

Beta test update

Last week, when I wrote about my beta experience, I was having about three outages an hour, averaging about 16 seconds each. That's no surprise in what is advertised as the "Better Than Nothing" beta, but because of these interruptions, I wasn't using Starlink for Zooming. Since then, however, Starlink has realigned existing satellites and launched 60 more, and I have moved my dish away from some obstructions. I now see fewer than two interruptions an hour, with an average duration of eleven seconds. This is as good as either the DSL service I get from Consolidated or my wireless ISP. According to the Starlink app, in the last twelve hours, my view of a passing satellite was obstructed for two minutes, there were no satellites for my dish to see for 12 seconds, and downtime for adjustments to the beta service totaled two minutes. I do plan to move the dish again, but even if I don't, Starlink is launching many more satellites, which should make both the obstruction and no-satellite-available problems go away. Beta-induced outages should end with the end of the beta (summer?).
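For anyone curious what those app figures translate to, here is a back-of-the-envelope availability calculation using the outage numbers the app reported over its twelve-hour window; this is just a sketch of the arithmetic, not anything the Starlink app itself computes this way:

```python
# Rough availability calculation from the Starlink app's outage figures.
WINDOW_SECONDS = 12 * 60 * 60  # twelve-hour reporting window

outages = {
    "obstructed": 2 * 60,     # view of a passing satellite blocked
    "no_satellite": 12,       # no satellite overhead to talk to
    "beta_downtime": 2 * 60,  # network-side adjustments to the beta service
}

total_down = sum(outages.values())  # 252 seconds of downtime
availability = 1 - total_down / WINDOW_SECONDS

print(f"Downtime: {total_down} s of {WINDOW_SECONDS} s")
print(f"Availability: {availability:.2%}")
```

Even in beta, that works out to better than 99% availability over the window, which squares with the service now being usable for streaming.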

Starlink is working flawlessly, as far as we can see, for streaming. I am now using it for Zoom and Skype, but the real test will come tomorrow, when I have some business Zoom calls.

See also: My Experience With Starlink Broadband, It Passes "Better Than Nothing" Beta Test and Is Starlink the Tesla of Broadband Access? I Have a Chance to Find Out

Written by Tom Evslin, Nerd, Author, Inventor | 08-Feb-2021 20:37

Boosting Domain Protection Strategies with Typosquatting Domain Intelligence

An enterprise's domain portfolio continues to change as it offers new products and services or withdraws old ones. Mergers, acquisitions, and buyouts also affect its domain portfolio. Constant monitoring of one's domain portfolio and its related infrastructure is crucial in today's cybersecurity landscape. Overall domain protection not only saves a company's network from specific threats but also helps protect its clients and website visitors from attacks.

A part of domain protection that can be overlooked is checking for potential typosquatting domains. These are domains that look similar to an organization's domain or brand name that threat actors can use to imitate the company.
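To make the idea concrete, here is a minimal sketch of how such look-alike candidates can be generated from a brand name. The transformations below (character doubling, adjacent swaps, brand repetition) are only a few illustrative techniques, not an exhaustive catalog of what threat actors actually do:

```python
def typo_variants(brand: str) -> set[str]:
    """Generate a few common typosquatting candidates for a brand name."""
    variants = set()
    # Character doubling: "lego" -> "llego", "leego", "leggo", "legoo"
    for i in range(len(brand)):
        variants.add(brand[:i] + brand[i] + brand[i:])
    # Adjacent character swaps: "lego" -> "elgo", "lgeo", "leog"
    for i in range(len(brand) - 1):
        swapped = list(brand)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        variants.add("".join(swapped))
    # Brand repetition, in the spirit of legoslegos[.]ru
    variants.add(brand * 2)
    variants.discard(brand)  # never count the genuine name itself
    return variants

print(sorted(typo_variants("lego")))
```

Defenders typically generate candidate lists like this and then check which candidates are actually registered under various TLDs.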

Typosquatting domains may be used to make phishing emails appear more credible and authentic. For instance, a parent who sometimes purchases Lego toys would be more likely to believe in the credibility of an email address like example@legoslegos[.]ru than one not containing the brand name "Lego." As part of possible phishing endeavors, typosquatting domains also let threat actors create websites that look identical or similar to an organization's official website.

Below is a side-by-side screenshot of the official Lego website (on the left) and legoslegos[.]ru (on the right):

Among the most telling signs of a typosquatting domain is that it doesn't share the same WHOIS registration record as the brand's official domain. Most large enterprises do not hide their WHOIS record details, as in Lego's case, whose registrant organization (LEGO Juris A/S), email address, and other information are publicly available through WHOIS Lookup.

The WHOIS record of the domain legoslegos[.]ru, on the other hand, is unavailable or hidden, so there is a high possibility that it is not owned or managed by The Lego Company.
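One way to operationalize this check is to compare the registrant fields of a candidate domain against those of the official domain. The sketch below assumes the WHOIS records have already been parsed into plain dictionaries; the field names are illustrative, since real WHOIS output varies by registry and parser:

```python
# Registrant details of the brand's official domain (per its public WHOIS record).
OFFICIAL = {"registrant_org": "LEGO Juris A/S", "registrant_country": "DK"}

def is_suspicious(record: dict, official: dict = OFFICIAL) -> bool:
    """Flag a domain whose WHOIS record is hidden or differs from the brand's."""
    org = record.get("registrant_org")
    if not org:
        # Redacted or hidden record: treat as suspicious, as with legoslegos[.]ru
        return True
    return org.strip().lower() != official["registrant_org"].strip().lower()

print(is_suspicious({}))                                    # hidden record
print(is_suspicious({"registrant_org": "LEGO Juris A/S"}))  # matches the brand
```

In practice this comparison would run over a bulk WHOIS export of all look-alike domains, with matches whitelisted and everything else queued for monitoring.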

An Example of Typosquatting Domain Protection Analysis

We randomly selected five enterprises (see the table below) to illustrate crucial checks included in the domain analysis and protection process. These organizations' stocks are publicly traded on the New York Stock Exchange (NYSE) and other markets.

We then used a domain and subdomain discovery tool to see the number of domains that contain text strings related to the five organizations. The table below shows the results that contain the company names.

| Company Name | Official Domain | Keyword Used | Number of Look-Alike Domains Found |
| --- | --- | --- | --- |
| CPA Australia | cpaaustralia[.]com[.]au | cpaaustralia | 45 |
| Danone | danone[.]com | danone | 1,774 |
| MIRVAC | mirvac[.]com | mirvac | 167 |
| PCCW | pccw[.]com | pccw | 1,226 |
| Vertiv Holdings | vertiv[.]com | vertiv | 916 |

We also ran the look-alike domains through Bulk WHOIS Lookup to see how many have WHOIS records that differ from those of the official domains. These are potential typosquatting domains, and monitoring them could help make domain protection programs more comprehensive.

As shown in the chart, three of the five companies have 90% or more of their identified look-alike domains qualifying as potential typosquatting domains. For the other two, the share is still high, at more than 75%.

Diving into the specifics of those domains, we found that an average of 15% of the five companies' potential typosquatting domains were less than a year old as of this writing. However, a significant percentage are more than four years old, with an average of 10% falling between five and ten years of age and 28% registered more than ten years ago. These findings tell us that while it is important to monitor newly registered domains (NRDs) for signs of typosquatting, some older domains also warrant close observation.
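An age breakdown like the one above can be reproduced by bucketing domains on their WHOIS creation dates. Here is a quick sketch using hypothetical dates; the bucket boundaries are approximations of the ranges mentioned in the text:

```python
from datetime import date

def age_bucket(created: date, today: date) -> str:
    """Classify a domain by registration age (illustrative bucket boundaries)."""
    years = (today - created).days / 365.25
    if years < 1:
        return "under 1 year (NRD)"
    if years <= 5:
        return "1-5 years"
    if years <= 10:
        return "5-10 years"
    return "over 10 years"

today = date(2021, 2, 8)
# Hypothetical creation dates, for illustration only:
for created in [date(2020, 9, 1), date(2018, 3, 15), date(2008, 6, 30)]:
    print(created, "->", age_bucket(created, today))
```

Running this over every look-alike domain's creation date and tallying the buckets yields the kind of percentage breakdown reported above.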

What's more, the most commonly used top-level domains (TLDs) for these domain names include .com and .net. We also saw country-code TLDs (ccTLDs), such as .nz, .es, and .tk, along with new generic TLDs (ngTLDs) like .xyz. Domain protection may therefore entail monitoring all TLDs rather than focusing only on the newest and most commonly abused ones.

Domain protection is a crucial cybersecurity practice that aims to protect the domain owner and its users or clients. It is a never-ending and constantly evolving process that includes, among other things, checking for typosquatting domains.

Are you a security researcher, product developer, or security officer working on ways to improve your domain protection strategies? Contact us for more information on the potential typosquatting domains and vulnerabilities mentioned in this post. | 08-Feb-2021 20:12

Technology: Doomsday or Godsend – the Choice Is Ours

Following the American insurrection and the role the media, social media in particular, played in it, the "doomsday scenario" has started to appear again in relation to technological developments. Only a few years ago, a group of hi-tech leaders, including Tesla's Elon Musk, warned against the negative aspects of artificial intelligence (AI).

Other technologies that could seriously affect human development include gene editing, nanotechnology and synthetic biology.

Technological doomsday scenarios are possible. A decade ago, I would have said that we as humanity would prevent those doom scenarios from happening; we have heard doomsday scenarios before and, with combined effort, have been able to manage them (for example, the nuclear threat).

The nuclear issue is a good example: as a global community, and despite being involved in a cold war at the time, we were able to prevent a doom scenario from happening.

The reason I am more cautious now is that we are dismantling our global collaboration structures. China and Russia are not the sort of societies most of us subscribe to, but we have no choice other than to be in dialogue with them. Current policies are based more on demonization, and that makes any form of collaboration nearly impossible.

Brexit is another example of stupidity, and America withdrawing — under its previous administration — from and/or undermining international institutions is another serious setback. While some of this will be reversed, some serious damage has already been done.

All the current technological development issues are of a global nature; none can be "solved" by single nation policies or actions.

The current crisis around social media shows what happens. These companies can, in a commercial sense, only thrive in a global market. They have now become so big that they threaten, in this case, social cohesion and democracy. We as humanity have failed to build structures around them to prevent the misuse we have been witnessing in America.

The fact that Chinese digital mogul Jack Ma, from Alibaba, was silenced for months shows that China understands this threat. His company was getting so big that it started to threaten China's Communist Party (CCP). Like it or not, the Chinese Government decided not to let that happen, surely also warned by the examples they saw in the USA.

If we in the Western democracies want to maintain our way of life, we can only do so by first getting together and secondly starting the dialogue with others, namely China and Russia, but obviously also Africa and so forth.

There is a lot of focus on the contribution social media has made to the current political crisis in America.

Is it too late for that? Have we done too much damage to our global structures of cooperation? Even if, under U.S. President Joe Biden, we can start the healing process, will we be strong enough to take serious action to prevent technology from being used to the detriment of humanity?

What has become clear is that we cannot just wait and see what happens and then try to rectify it afterward. Instead, we do need policies and regulations to ensure that we as humanity stay in control of those developments from the start. Europe is looking at preventative regulations as we discussed here.

I always remain optimistic, but I also want to be a realist. Of course, we can manage our technologies in a way that they assist humanity and not damage us. The question is, do we have the right leadership, political will, and the right economic, social and political structures in place to do so? Are we willing to look at structural changes as I discussed here, or are we muddling on and moving closer to the precipice and waiting for another crisis? Or will one of those future doomsday scenarios be a real existential one?

I would argue that we do need technologies to face the many challenges before us. They are just tools; humans are toolmakers, and we have always used tools to advance humanity. So, we can and should use technologies to help us overcome doomsday scenarios. The choice is ours.

Written by Paul Budde, Managing Director of Paul Budde Communication | 08-Feb-2021 19:59

Reflections on the Pandemic Effect on Internet Use and Democracy

Around the same time last year, the same-day release of two flagship reports on 'the Internet' prompted me to write an article on CircleID entitled 'Connecting the Next 46 Percent: Time to Pick the Good From the Bad and the Ugly'. I was then prudently asking whether 'the more we connect the world, the less free it becomes?'.

Who would have known that a pandemic would erupt a few months later, unveiling different perspectives in assessing that very same question?

Sadly, even before the outbreak of the coronavirus, democracy and democratization were on the decline worldwide, as evidenced by data from Freedom House and V-Dem. While the pandemic has certainly created more demand for connectivity, it is unlikely to alleviate calls for more democratic governance.

The conventional wisdom of the late 1990s, that the Internet is inherently democratic in nature, is nowadays less and less able to obscure the net impact of Internet use on democracy, in almost all countries across the democracy and development continuum. Even the US will likely rate lower this year, affected by the toxic polarization effect of social media (a concept captured by V-Dem indices).

SMART or not, regulation seems unavoidable, but what might that look like?

Experts and researchers are exploring concrete options.

Written by Kitaw Yayehyirad Kitaw | 08-Feb-2021 19:25

Taking the Long View: Will This Be the Year ISPs Rethink Their Business Plans?

I have to wonder if this year is making the big ISPs rethink their business plans. For years, many big ISPs have foregone making long-term investments in broadband and instead chased the quick return.

A good example is CenturyLink. Before the merger with Level 3, the company had started an aggressive program to replace its copper plant in urban markets with fiber. At the peak, the company was building fiber past 700,000 homes a year. This was not a surprising direction for a company that had its roots as a rural telco. The company's executive team understood the huge benefits of building a business that spins off cash year after year. The company clearly envisioned growing to tens of millions of satisfied urban fiber customers.

But that strategy stopped dead cold when the company merged with Level 3. Within a year, the Level 3 team had wrested away control of the company, and I recall a quote from new CEO Jeff Storey that the business was no longer going to chase growth with 'infrastructure returns.' Like most big ISPs, the new CenturyLink management started chasing quick-hit returns. For the last few years, CenturyLink press releases have highlighted the number of urban buildings the company has added to the network.

And then the pandemic turned that strategy on its head. Everything I read in the business press says that many companies will not be returning in full force to downtown offices. The business real estate market is likely facing a bleak upcoming decade of vacant spaces until the industry right-sizes itself. The likely big downturn in the business real estate market also means a big downturn in the urban broadband market — a strategy for selling to downtown businesses can't be as effective when the businesses are sending staff permanently home to work.

What is so odd about the strategy of the big ISPs is that they would love broadband customers that spin off big piles of cash. That is precisely the business plan most fiber overbuilders are chasing: make the big investment in fiber-to-the-premise, and then reap the rewards for decades with good cash returns.

Most big ISPs share the same philosophy as CenturyLink, where quarterly earnings and short-term investments are preferred over capital intensive but long-term steady returns. AT&T has built tiny clusters of fiber in markets all over the country instead of replacing all of its copper in its historic market. Verizon has always been the most disciplined ISP and has only built broadband in neighborhoods that meet its cost profiles. This has resulted in a hodgepodge of FiOS fiber scattered throughout the northeast. It's hard to think the company won't use this same discipline in building its fiber-to-the-curb wireless product — some blocks will get the new network while adjoining neighborhoods will be bypassed.

The only big ISP that seems determined to expand by grabbing every possible customer is Charter. The company has clearly recognized that it has won the battle against DSL and is becoming a de facto monopoly in most of its markets. But rather than sit back and collect cash, Charter is aggressively planning to grow to the outer suburban and even rural areas surrounding its markets. To some degree, Charter seems to be the only big ISP that is pursuing a strategy of maximizing economy of scale, where efficiency and profitability are maximized by getting as many customers as possible in a geographic region.

It's interesting to compare AT&T and Charter. AT&T has a few thousand fiber customers in practically every market in the country. Altogether, that adds up to millions of customers on fiber, but it also means a widely dispersed technician base to service them. That's drastically different from Charter, which seems to serve every customer within big circles around major markets. My experience in building business plans tells me that the Charter strategy will be far more profitable in the long run.

None of this would matter much except for the fact that a handful of giant ISPs control most of the broadband customers in the country. The combination of Charter, Comcast, AT&T, and Verizon currently serves 72% of all broadband customers in the country. The decisions of these few big ISPs determine the only broadband options available to millions of us.

Written by Doug Dawson, President at CCG Consulting | 07-Feb-2021 19:27

25 Years of John Barlow's Declaration of Independence in Cyberspace: When Visions Meet Realities

On February 8, 1996, John Perry Barlow published "A Declaration of the Independence of Cyberspace" in Davos. Inspired by the "Digital Revolution" and the "Dot-Com Boom", he predicted a new "Home of Mind," a cyber world without governments. "Governments of the Industrial World", he wrote, "you weary giants of flesh and steel. I come from cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather."

Twenty-five years later, we know that Barlow, who died in 2018, was both right and wrong. He was right in predicting a "new world." But he was wrong to expect that it would be a "home" without governments. However, whether Barlow was right or wrong is probably not the right question. The more interesting issue is the process triggered by his proclamation, not the projection itself.

The "Story of the Internet"

To understand the "history of the Internet," one can go back to October 4, 1957. The "Sputnik Shock" pushed the Eisenhower administration to establish not only NASA, but also ARPA, the "Advanced Research Project Agency." ARPA operated under the US Department of Defense (DoD) and was tasked to make the United States more resilient against foreign attacks. Both agencies became success stories: In 1969, NASA did send the first man to the moon, and ARPA was presenting ARPANET, a decentralized communication network. In the 1960s, the Rand Corporation, working together with the DoD, recognized the vulnerability of centralized communication networks. The idea was to develop a decentralized network, which would overstretch foreign adversaries' capacity if they plan to destroy the communication system. On October 29, 1969, ARPANET connected computers in Stanford, Los Angeles, Santa Barbara and Utah. For some people, this date is the birthday of the Internet.

1969 was also the year when the Nixon Administration invited the Soviet Union to enter into "Strategic Arms Limitation Talks" (SALT) to bring the nuclear arms race under control. This had consequences for ARPANET. The project did not disappear, but it was no longer a first priority for the DoD.

There was an interesting side effect. The graduate students who were involved in ARPANET continued to think outside the box. The idea of a network with no power in the center but knowledge at the edges, a network enabling free communication among everybody, anywhere, regardless of frontiers, was an attractive concept for a new generation that, after the painful years of the Vietnam war, had its own ideas about democracy, freedom, and self-determination. For them, this toy became a tool to build something new, something that enhanced freedom and went beyond traditional borders. New protocols and innovative applications enabled the emergence of a new virtual world with RFCs, TCP/IP, the DNS, the "@", the "dot," and self-organized institutions such as the IETF and IANA. This "new world" was self-regulated by a "netiquette", based on the concept of individual and borderless freedom and populated by "good guys". It was not disconnected from the "rest of the world", but the majority did not really understand what that "network of networks" was about.

The bridge-building to the "rest of the world" started in 1991 with Tim Berners-Lee's HTTP protocol. The World Wide Web created new business opportunities that triggered the "Dot-Com Boom" and the vision of a "New Economy." The Clinton Administration (1993–2000) realized quickly that "the Internet" was much more than a "technical toy". US Vice-President Al Gore's "National Information Infrastructure Initiative" (NII) from 1993 recognized the far-reaching economic, political and social implications.

From "Technologies of Freedom" tot he "Darkening Web"

Barlow was not the first to reflect on the broader implications of the "digital revolution". In the 1970s and '80s, Zbigniew Brzezinski's "Technetronic Era," Ithiel de Sola Pool's "Technologies of Freedom," and Alvin Toffler's "Third Wave" started the discussion. In the 1990s, Manuel Castells' "Network Society," Nicholas Negroponte's "Being Digital," and Frances Cairncross's "Death of Distance" were eye-openers. The Silicon Valley "Cluetrain Manifesto" from 1999 took inspiration from the 95 theses of Martin Luther, who kick-started the Reformation in Europe 500 years ago. "We reject kings, presidents and voting. We believe in rough consensus and running code," David Clark had already said in 1992.

In other words, Barlow's declaration was not so exceptionally new. Nevertheless, his statement was special. His reference to the US "Declaration of Independence" of 1776 made it much more political. Barlow knew how to use words and talk to people; he wrote songs for the rock band the Grateful Dead.

At first, Barlow's vision inspired many constituencies. I remember a discussion at Harvard, where Charles Nesson asked his audience to imagine the moment in Philadelphia's Independence Hall when the US constitution was drafted and the institutions of US democracy were designed. "We have now to build the democratic institutions for a digital 21st century", he said. It was the time when ICANN was seen as a pilot project for "cyberdemocracy" and prepared "global elections" for its "Board of Directors".

Barlow argued in his declaration: "We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us, nor do you possess any enforcement methods we have true reason to fear. Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before."

Five years later, reality brought his vision down from the Swiss mountains to earth. In 2001, the "Dot-Com Bubble" burst, and 9/11 turned the theoretical debate around "cyberdemocracy" into a very practical discussion of "cybersecurity".

Within ten years, the number of Internet users grew from one million to one billion. The borderless opportunities of the interconnected world were not only used by the "good guys"; they also enlarged the space for criminal activity: vandals, hate preachers, pedophiles, terrorists, money launderers and other "bad guys". The new publications had more pessimistic titles: "The Future of the Internet and How to Stop It" (Jonathan Zittrain) or "The Darkening Web" (Alexander Klimburg). As Jeff Moss, the founder of Black Hat, once argued: "We created innovations to keep the governments out. With the new applications, big money came in. Big money attracted the criminals. And with criminals in cyberspace, it is only natural that governments came back."

Was Barlow wrong? Yes and no. Even if governments are back, they are back in a different way. The world is now a cyberworld. The economy is a digital economy. The new complexity of the global Internet governance ecosystem can no longer be managed in the traditional way. In 2005, at the UN World Summit on the Information Society (WSIS), the heads of state of 193 UN member states recognized that the governance of the Internet needs the involvement of all stakeholders, including the private sector, the technical community and civil society. It requires the "sharing" of policy development and decision making. The so-called multistakeholder model became the blueprint for global Internet governance.

Even if the model has many conceptual weaknesses and is stress-tested by numerous new challenges, there is broad recognition that governments alone will not find solutions for the problems of the digital age. Referring indirectly to Barlow's declaration, the 'High-Level Panel on Digital Cooperation,' established by UN Secretary-General António Guterres, titled its 2019 final report "The Age of Digital Interdependence". In this sense, the "return of governments" is more than a pendulum swinging back. It is not "government or the community"; it is "government and the community". Humanity is now on a new layer and still has to figure out how this new digital cyberworld functions, how it can be governed, and how "sharing" can be organized in a political environment dominated by power struggles and moneymakers.

Lessons from the Industrial Revolution?

Today's digital revolution is now often described as the "4th Industrial Revolution". Looking backward, does it make sense to learn some lessons from the "1st Industrial Revolution"?

When the industrial age started in the first half of the 19th century, a 30-year-old German rocked the world by arguing that this industrial revolution was much more than steamboats, trains, electricity, factories and the telegraph. He predicted a "new economy" and a "new society". In 1848, Karl Marx called his declaration the "Communist Manifesto." But Marx was soon confronted with the realities of his time. In a speech in London on April 14, 1856, he recognized the deep contradictions: "In our days, everything seems pregnant with its contrary: Machinery, gifted with the wonderful power of shortening and fructifying human labor, we behold starving and overworking it. By some strange, weird spell, the newfangled sources of wealth are turned into sources of want; The victories of art seem bought by the loss of character. At the same pace that humanity masters nature, man seems to become enslaved to other men or to his own infamy. Even the pure light of science seems unable to shine but on the dark background of ignorance."

History did not work out as Marx expected. However, 100 years later, the world was "fully industrialized". The kingdoms that ruled the world when Marx was a young journalist no longer existed. They had been replaced by rather different types of "republics": on the one side, democracies based on respect for human rights and the rule of law; on the other side, autocracies based on a one-party system, with a single man at the top dictating to the rest of the country what to do. Even worse, after two world wars, 1948 saw the start of a cold war between the two blocs. And it took nearly another half century until the heads of state of the two blocs declared democracy the winner of the industrial age.

Their vision in the "Charter of Paris" (1990) reads as follows: "Ours is a time for fulfilling the hopes and expectations our peoples have cherished for decades: steadfast commitment to democracy based on human rights and fundamental freedoms; prosperity through economic liberty and social justice; and equal security for all our countries. We undertake to build, consolidate and strengthen democracy as the only system of government of our nations. Democratic government is based on the will of the people, regularly expressed through free and fair elections. Democracy has as its foundation respect for the human person and the rule of law. Democracy is the best safeguard of freedom of expression, tolerance of all groups of society, and equal opportunity for each person. Democracy, with its representative and pluralist character, entails accountability to the electorate, the obligation of public authorities to comply with the law, and justice administered impartially. No one will be above the law."

Isn't this a nice vision? Peace and understanding, prosperity, economic liberty, and social justice for everybody from Vancouver to Vladivostok? And this "vision" came from governments, not from dreamers like John Perry Barlow. However, this vision, too, did not survive the stress test of reality.

Looking forward: Waiting for the Grandchildren?

In 1991, shortly after the "Charter of Paris" was signed, the World Wide Web opened the door into the "digital age". 30 years later, the "fathers of the Internet" are now grandfathers. Their children commercialized, politicized and weaponized cyberspace. The visions of yesterday have disappeared behind the horizon. Today's realities tell us that all the outstanding achievements, the new applications and services which made our lives freer, easier, richer and more comfortable, have a dark flip side. Social networks risk becoming censors; search engines risk becoming global watchdogs; we are surrounded by mass surveillance, biometric control systems, and a swamp of fake news and hate speech. New profitable applications destroy traditional businesses, and it is unclear whether this is "creative destruction" (Schumpeter) or the road towards a more deeply divided society. We have to struggle with cybercrime, misinformation, market dominance, digital trade wars and lethal autonomous weapon systems. Will platform regulations, digital taxation, norms of state behavior in cyberspace, and rules for an ethical approach to artificial intelligence help manage our future? What will the grandchildren of the Internet do with this new generation of problems that John Perry Barlow did not touch in his declaration?

History doesn't repeat itself. Nobody knows how our world will look 25 years from now. One can certainly expect a "fully digitalized world" in 2046. But will this world be a "civilization of mind"? Will every individual have affordable access to the Net? Can we enjoy the fruits of a successful "green and digital deal"? Will digital progress have improved our environment, education and healthcare? Will there be "decent work" for everybody? Will the world be more "humane and fair"? Or will we have to struggle through a digital "cold war" between cyber-democracies and cyber-autocracies?

To have visions and dreams for the future is always a good thing. It is needed to inspire people, broaden their views, and stimulate the imagination. But one should also be aware that reality takes a different road. Today is a result of yesterday; tomorrow is a result of today. Winston Churchill once said: "A nation that forgets its past has no future". For that reason, I would recommend that tomorrow's professors add Barlow's "Declaration of the Independence of Cyberspace" to their students' reading list.

Written by Wolfgang Kleinwächter, Professor Emeritus at the University of Aarhus | 06-Feb-2021 20:43

The Internet Isn't Privatized Until .com Is Put Out for Bid

Part 3 of The Netizen's Guide To Reboot The Root: Rampant dysfunction currently plagues the Internet's root zone where a predatory monopolist has captured ICANN and is bullying stakeholders. This harms the public interest and must be addressed — here's how.

Introduction: Whose Registry Is It Anyway?

Previously, this series tackled the terribly awful Amendment 35 to the NTIA-Verisign cooperative agreement and also made the case that the tainted presumptive renewal currently included in registry agreements is inherently anti-competitive. But renewing legitimacy and integrity of Internet governance requires accurately understanding the unique and significant role retained by the U.S. government following the IANA transition.

Much of what ails DNS governance can be traced to the fever dream of entitlement that grips the monopolists operating and overseeing the vast bulk of the DNS — Verisign, which operates .com and .net; the Internet Society (ISOC), which operates .org; and ICANN, which oversees them both as well as the rest of the DNS. This sense of entitlement is evident, for example, in ISOC's attempted monetization of the billion-dollar value of .org, which is curiously similar to Verisign's proprietary attitude towards .com and .net.

These registries, together with .gov, .mil, .edu, and .int, are the Internet's first registries and were originated solely by the U.S. government in the mid-1980s — long before the Internet Society, ICANN, and Network Solutions (Verisign's predecessor-in-interest) entered the picture. It follows, therefore, that any rights, interest, and control in these registries are vested solely in the U.S. government, if for no other reason than because nobody else was there.

This means that .com, .net, and .org must be viewed in much the same way as .gov and .mil — which are controlled by the U.S. government. The difference, of course, is that the U.S. government and military reserve the latter two for their exclusive use while the other three are made available to the public. Much of the rest of the world already seems to understand this intuitively, and this is demonstrated by the fact that, outside of the U.S., .com is already considered to be the de facto U.S. ccTLD.

Another critical factor is Article IV of the U.S. Constitution, which requires Congressional authorization for divestment of Federal property. U.S. law already treats domain names as "property." The Anticybersquatting Consumer Protection Act allows trademark owners to bring in rem legal actions against domain names in order to seize the names and adjudicate the rights associated with them. Using this premise, the Department of Homeland Security has seized tens of thousands of domain names involved in copyright infringement.

Lawyers will argue this many different ways, but it seems bizarre to even suggest that domain name registries valued in the billions and that throw off free cash flow like it's going out of style are some sort of special class of valuable things which aren't assets or property and, therefore, aren't subject to Article IV.

It is precisely this sort of nonsensical thinking, abetted by the U.S. government's misguided diplomatic deference to so-called "middle states" and pretense of arms-length disinterest, which is causing a vacuum of legitimate control pertaining to these registries. This has resulted in circumstances markedly similar to the deterioration that results from the disuse and neglect of an absentee property owner. Such circumstances often attract squatters and can presage a neighborhood going to hell. In this case, rushing into the void of control left by U.S. government absenteeism are not only the organizations operating these registries but also nation-states pursuing their own agendas, particularly China with its preoccupation with both centralized control and censorship.

Notwithstanding wishful thinking, these original registries — which comprise a majority of the global Internet — share the fundamental attribute of having been originated solely by the U.S. government which, absent an Act of Congress, retains one hundred percent of any rights to them. This makes .com, .net, and .org no different than .gov and .mil except that they aren't restricted to U.S. government and military use.

With this frame of reference, let's dive into the final part of saving the Internet in three simple steps.

Ctrl-O: Open the Internet's Largest Registry to Market Competition

The Internet's largest registry has never been subjected to anything resembling a market, and correcting this is the unavoidable last remaining step of the Internet's privatization. It is possible, perhaps probable, and likely even desirable that Verisign will continue operating .com, but only after competitive bidding has rationalized the economics associated with the most popular DNS real estate.

Similar to a tax cut, market-based wholesale pricing would permit .com registrants to reallocate resources to something other than stock buybacks benefitting the world's richest and most powerful institutional investors. This would have an outsized impact on portfolio registrants such as brands and investors — the job creators, small- and medium-sized businesses, and other productive economic engines that, by and large, provide the wealth that is transferred to pay for those stock buybacks.

Also, just as idle hands are the devil's workshop, the monopolist's outsized free cash flow finds pernicious purpose. Right-sizing the economics of .com will reduce the resources available to corrupt the entire ecosystem.

Although mostly forgotten now, introducing .com to the virtues of market economics was always envisioned as part and parcel of privatization. The language of the base cooperative agreement, which was signed by the U.S. government and Network Solutions in 1993, anticipated this by empowering the U.S. government, in its sole discretion, to terminate the .com registry agreement and initiate a competitive action for selecting a successor registry operator.

It may be useful to consider that the continued existence of a cooperative agreement for the only remaining sole-source registry is tacitly admitting that "one of these things is not like the others" and that privatization remains incomplete. Significantly, the cooperative agreement maintains a harmful fiction that the U.S. Department of Commerce conducts oversight that, historically speaking, has been irresponsibly half-hearted at best. Intentionally or not, this pretense has the effect of putting .com "out of scope" for any regulatory activity other than that of NTIA. This causes systemic complacency from other governance stakeholders, including competition regulators at the U.S. Justice Department, ICANN, and the broader stakeholder community.

Such complacency is seen in the lack of any updated Justice Department empirical competition review since 2012 and in ICANN's newfound aversion to anything that might paint it as a regulator. This has deleterious consequences for the integrity of governance — one example of which is ICANN's recent decision to sell pricing power to Verisign for $20 million over the unanimous objection of more than 9,000 stakeholders.

Despite what Verisign, ISOC, ICANN and certain others may wish for, privatization isn't achieved by NTIA's laissez-faire approach. Although seemingly counter-intuitive, a strong argument can be made that the single greatest beneficiary of NTIA's previous attempt at consumer protection — the 2012 .com price cap — was actually Verisign. This is because the price cap removed one of the two codependent conditions that the U.S. Court of Appeals for the Ninth Circuit had found in 2011 plausibly indicated a conspiracy for the illegal restraint of trade. Thus, NTIA actually gave Verisign de facto immunity from further antitrust litigation while preserving presumptive renewal and kicking the can on pricing power down the road. In retrospect, it is clear that Verisign didn't need the ability to raise prices in order to become the darling of Wall Street — it managed to produce the requisite quarterly growth from organic zone expansion and expense reductions. Far more dangerous was the risk posed by further private-party antitrust litigation or government enforcement — both of which are again possible since pricing power has been reinstated.

Privatization also wouldn't be achieved by merely terminating the cooperative agreement, which would irreparably damage the DNS by enshrining the currently weakened and dysfunctional governance in place with something approaching permanence. Recent actions by NTIA — including recent letters to Verisign (blocking a planned auction) and to Congress (regarding the failed effort to replace WHOIS) — seem to indicate renewed engagement and regulatory focus. This is a step in the right direction, and NTIA should leverage this recent progress and continue building momentum towards renewal of legitimate multistakeholder Internet governance.

The continuing existence of the cooperative agreement clearly signals that work remains to be completed. The U.S. government seemed to acknowledge this, with an air of resignation, when Amendment 35 made the cooperative agreement automatically renew every six years. The reality is that the ludicrously outsized power, profits and prominence of .com make enhanced oversight necessary for the foreseeable future.

In any event, the concession rights for operating the .com registry must be subjected to a fair, legitimate, and transparent competitive bidding process. The many amendments to the cooperative agreement since 1993 have made this rather more complex, but not impossible. Whereas, before, the U.S. government could act at its sole discretion, now it must request and obtain a final judgment from a federal court before it can terminate the .com registry agreement and initiate a "competitive action" for selecting a successor registry operator.

Justifying such a request might be the Second Amendment to the .com Registry Agreement, which pertains to an auction that was requested by Verisign and approved by ICANN. Given that NTIA has halted the impending auction because of concerns about violating the cooperative agreement, perhaps the intent evidenced by having already amended the registry agreement without seeking or receiving prior written authorization from NTIA is sufficient grounds for obtaining final judgment. Admittedly, devising the best and most appropriate path for accomplishing this will require talented lawyers. However, allowing the current dysfunction to persist is a recipe for existential disaster and a fundamental betrayal of multistakeholder Internet governance.

But besides being already deficient in some key areas, the current model for .com oversight also doesn't anticipate many of the potential development paths that future circumstances may take. Far from winning the future, the status quo is already losing today and there is a serious need for reevaluating preconceived notions as well as planning for potential new realities.

One such potential reality may be that Verisign is acquired and taken private. While the Committee on Foreign Investment in the United States, or CFIUS, might block an acquisition by a foreign entity, it wouldn't prevent a purchase by U.S.-based private equity, nor would a management-led buyout necessarily face many regulatory hurdles. The transparency of quarterly financial reporting is likely a constraint that Verisign — and the malefactors of great wealth owning it — wish ardently to escape.

Verisign's rotten behavior is prima facie evidence of the harmful consequences of allowing .com to continue on its present course. Remaining sole-source in a privatized Internet is like trying to be half-pregnant — it doesn't work that way and the U.S. government needs to stop procrastinating, dump the wishful thinking, and hit Ctrl-O to open up .com to competition by asking a judge for the final judgment required for terminating the .com registry agreement and initiating a "competitive action" for selecting a successor registry.

Finally, addressing the structural defects laid out in this series won't be easy, but there are simple solutions that can significantly help restore legitimacy and integrity to governance at the global Internet's root. The consequences of the rot at the root are compounding at an accelerating pace, and left unaddressed will go parabolic. Internet freedom and human lives — to say nothing of truth, justice, and fair play — demand decisive action now.

Written by Greg Thomas, Founder of The Viking Group LLC | 06-Feb-2021 19:58

The Internet of Trash

It's often a clear signal that we're in deep trouble when politicians believe they need to lend a hand and help out with regulations. A bill has been passed by the US Congress, and now signed into law, that requires the National Institute of Standards and Technology to work with other agencies in developing guidelines for the use of connected devices, covering the management of security vulnerabilities and patching, together with configuration and identity management.

Either the actions of the market have failed consumers, and some form of public action is necessary to address aspects of this failure, or the situation is so desperately broken and beyond help that the legislature is performing a largely ineffectual action that serves more to disclaim any residual responsibility on the part of the public sector for the mess that we've created than actually to achieve a tangible outcome.

In the case of the Internet of Trash, it certainly has the appearance that everyone, including sundry politicians, is taking a big step backward to stand well clear of any form of attribution of blame when we wonder how we managed to stuff it up so thoroughly. And who can blame them? This mess was not of their making.

And without a doubt, this is a pretty significant mess that we find ourselves in!

How did we get here?

The silicon chip industry is truly prodigious. These days a small set of fabrication plants manufacture upward of 30 billion processors each year. The total production of various forms of memory, logic, processing and control systems is worth some half a trillion dollars per year. The market capitalization of the public semiconductor makers and integrators now exceeds a stunning 4 trillion USD, four times their valuation of just five years ago. We are digitizing our world at a phenomenal pace and at the same time embarking on the process of binding these devices together, often using the Internet as the connective substrate. The Internet may be populated with some 4.6 billion human users. Still, the various conservative estimates of a device census for the Internet typically exceed 30 billion devices of one form or another. Our label for this is the Internet of Things, and the term encompasses all kinds of devices, services and functions. These days we can find these devices in cars, televisions, household security systems, weather stations, webcams, thermostats, power strips, lightbulbs and even door chimes. Beyond the consumer market, there are entire worlds of devices in workplaces, hospitals and diagnostic centers, factories, farms, vehicles of all shapes and sizes, and so on. This is not a recent shift, but it's certainly taken off in recent years as chip costs plummet, chip power consumption falls, battery technology improves, and our ability to embed digital capabilities in all kinds of devices just keeps on improving.

Who gets to look after all these computers? What does "look after" even mean?

The 1980s was the last decade of so-called "mainframe" computers. These devices not only required their own carefully conditioned temperature- and humidity-controlled environment but were assiduously tended by a team of specialists who looked after the hardware, kept the software up to date, and even maintained the copious paper-based system documentation. While the capital costs of these devices were considerable, the lifetime operating costs of these mainframe systems were probably far higher. As general-purpose computers dropped in size and price and became more tolerant of a greater range of environmental conditions, they relocated out of the computer room to the desktop, then into our pockets, and from there to an existence of being embedded in devices where they are all but invisible. We don't have "computer operators" to tend to these devices. We don't even want to look after them ourselves. We paid so little for most of these "clever" devices that neither the retailer, the manufacturer, nor anyone else in the supply chain is remotely interested in looking after their products once they are sold. I guess that we just assume that the computers will be able to fend for themselves!

In some cases, and for some devices, that assumption about the lack of care on the vendor's part as to the fate of the devices they've pushed out into the consumer market is not warranted. Automatic updates have been incorporated into a number of popular computer platforms, where the vendor has assumed some responsibility for the device for a while, and during this time, it exercises some level of remote control to automatically synchronize the device's software version to the current level, applying updates and patches as necessary over the Internet. An Android or iOS platform will upgrade its software from time to time, and the apps on these platforms seem to be living in a constant upgrade cycle. In some ways, it's reassuring that the platform is being updated and known vulnerabilities are being addressed within this regular framework.
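The decision at the heart of such an auto-update cycle is simple in principle: compare the installed version against the vendor's latest release. A minimal sketch (all names and version numbers here are invented for illustration):

```python
# Illustrative sketch only: how an auto-update client might decide
# whether a patch is needed. Versions are compared as numeric tuples
# so that "1.10.0" correctly sorts after "1.9.9".

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '14.2.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, latest: str) -> bool:
    """True when the vendor's latest release is newer than what's installed."""
    return parse_version(installed) < parse_version(latest)

print(needs_update("14.2.1", "14.3.0"))  # True: a newer release exists
print(needs_update("14.3.0", "14.3.0"))  # False: already current
```

The hard part, of course, is not this comparison but everything around it: a vendor that still publishes releases, a secure channel to fetch them, and a device that can safely apply them in the field.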

However, that's not always the case. In many other cases, it's left to me to look out for software and firmware updates and then go through the process of applying them to the device. Why should I bother to do this? If I bought a camera that still takes photos, a printer that still prints, or a car that still seems to work perfectly well as a car, why should I bother? Obviously, the issue is not about a better camera, a better printer, or a smarter car. The issue is that vulnerabilities within the processing functions embedded in the device are exposed over time, and older systems are at risk of being exploited through these vulnerabilities. The risk is perhaps a little more subtle than this. The printer may be perfectly fine, and it would still function precisely the same way it always has. It's just that it may also have been quietly and invisibly co-opted to be a rabid attack zombie in its copious spare time!

In too many other cases, there's just no ongoing vendor support at all. No patches. No updates. Nothing. It's not that the devices are just perfect, and no maintenance at all is necessary. They're so far from any such ideal, assuming that we have any idea what such an ideal may be in any case. Vulnerabilities are uncovered on an ongoing basis. Some are as simple as exploiting usernames and passwords that were loaded into a device at manufacture. (admin/admin123 and username/password are, depressingly, still common, and anyone who did a jailbreak on an iOS device must remember the root/alpine credential combination). Other vulnerabilities come from third-party software libraries packaged into the device, and these days every system is built mainly on a disparate collection of third-party libraries. Other vulnerabilities are more insidious, resulting from subtle interactions between code and data that push the device into an unanticipated state. All this means that without ongoing support from the vendor, the task of keeping a system up to date concerning currently known vulnerabilities is a close to impossible task. If neither the vendor nor the consumer can upgrade or even manage the device, then this is where we are at our most vulnerable. If these devices are widely deployed, unmanageable and vulnerable to hostile manipulation and control, the results can be truly catastrophic. We need to look no further than the Mirai botnet attack of October 2016 that caused a few hours of massive disruption in the United States. You'd probably like to think that maybe we've learned from this, and we no longer use devices on the network that are absolutely wide open for hostile exploitation. But, of course, that's just not the case. It's far easier and, of course, far cheaper to just forget about such risks and press on!
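The factory-default credential weakness is trivially mechanizable, which is exactly what Mirai did: it simply tried a short dictionary of well-known username/password pairs against every device it found. A hedged sketch of that check, usable defensively by an auditor (the credential list is a small illustrative sample, including the pairs mentioned above):

```python
# A small sample of factory-default credential pairs of the kind a
# Mirai-style scanner tries against exposed devices (illustrative list).
KNOWN_DEFAULTS = {
    ("admin", "admin123"),
    ("username", "password"),
    ("root", "alpine"),  # the well-known root password on jailbroken iOS
}

def uses_default_credentials(username: str, password: str) -> bool:
    """True if a device still accepts a well-known factory credential pair."""
    return (username, password) in KNOWN_DEFAULTS

print(uses_default_credentials("admin", "admin123"))    # True: wide open
print(uses_default_credentials("alice", "x9!unique"))   # False
```

The point is not the sophistication of the attack, which is nil, but that a few lines of code suffice to sweep the entire IPv4 address space for devices that shipped this way and were never touched again.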

What should we do about it?

Maybe the US Congress is doing the right thing in trying to impose some minimum standards of post-sale vendor support for devices. Maybe we go further and adopt a regulatory framework that bans the sale of devices that are not upgradeable in the field.

Ignoring the obvious issues of national jurisdictions and the ease of global shipping of goods, there are still issues with this regulatory approach. Devices don't get upgraded for a myriad of reasons. One is that silicon does not age so readily. It's not at all unusual to see fully functioning hardware platforms that are more than 30 years old. For how many years should a vendor be required to provide upgrades to devices that they sold? And if the vendor goes out of business for any reason, then who takes over the support role for the equipment? What's the economic model that provides sufficient incentive for a vendor to continue to provide ongoing software maintenance for legacy systems that were last sold two or three decades ago? Is this a case of legislating a standard of behavior that is impossible for the industry to achieve?

For example, in the router business, Cisco was the dominant vendor for many years. Router hardware was built to last, and it is not surprising to find equipment in the field that is more than 20 years old. If you look at a workhorse like a 7200 router, which Cisco introduced in the mid-1990s and stopped selling in 2012, there is a considerable legacy issue. Cisco retired the system from maintenance patches a year ago, but of course, there is still a sizeable population of devices out there that are now operating in unsupported mode. Equally, there is still a robust market for these devices, even though there is no ongoing vendor support. From the vendor's perspective, a legacy support timeframe of 7 years after the last sale is probably an anomaly in our industry, and shorter timeframes are more the rule than the exception. Maintaining support for a further seven years after the product was no longer being manufactured and sold is perhaps as much as could reasonably be asked.

Other vendors retire support at a more rapid pace. Apple released iOS 10 in late 2016, and the last software patch was released in July 2019. The situation with Android is perhaps even more dire. The hardware vendors are placed in the position of being responsible for the provision of Android software updates to their platform, and this responsibility is only partially transferred to mobile network operators. The result is that the dominant operating system platform out there on the Internet is supported only in a manner that can best be described as somewhere between piecemeal and none at all!

No doubt, many vendors would like to solve this service problem by reducing the useful service lifetime of the product even further, providing almost irresistible incentives to consumers to replace their device every year, and simply avoiding the entire field upgrade support issue as a result. However, such short useful product service lifetimes generate their own issues. We sell some 1.5 billion smartphones per year, and with a short service lifetime, we are also retiring some 1.5 billion smartphones each year, generating copious quantities of e-waste. This is a problem that just gets larger every year.

All this is getting ugly. We have ongoing issues with maintaining resiliency in these devices in the face of the increasing complexity of service and function that we are cramming into them. The Android operating system has some 12 million lines of code. Windows 10 is reported to have 50 million lines of code. And that's just the platform. Apps add to the burden, of course. The Facebook app comes in at some 20 million lines of code. Maintaining such vast code bases is challenging in any environment. It seems like we are operating all this digital infrastructure by just keeping a few millimeters ahead of the next big problem. These common code bases are deployed in billions of devices. And the support arrangements we currently use for this scale of deployment are, on the whole, just not coping.

If we really don't have a good understanding of how to operate a safe and secure digital infrastructure within the current computing environment with its diversity of support and security models, its mix of proprietary and open-source code bases, and the increasing complexity of the roles where we deploy such devices, it feels like sheer madness to then adopt a model of truly massive production and relentless cost shaving to embrace the world of the Internet of Things. At some point along this path, this Internet of Trash becomes so irretrievably toxic that we can't keep it going any longer!

It's impossible to just give up and walk away from the entire mess. While it took many decades for the digital world to permeate into all aspects of our lives, we are now well beyond the point of no return. Mere convenience has turned into complete dependence, and we are stuck with this for better or worse.

So we are back to the quite conventional view of the public sector's role in intervening in markets where there is a clear case of market failure. In this case, the industry has become trapped in a vicious cycle of producing large volumes of product within a cost-constrained environment where concepts of the quality and robustness of the product are secondary to considerations of time to market and cost of the product. The conventional response to such situations is invariably one of imposing a minimum set of product standards on the industry that ensure the utility and safety of the product. Perhaps the real question is why it has taken so long for us to realize that such regulatory measures are as necessary in the digital world as they are in the food safety and airline industries?

"Move fast and break things" is not a tenable paradigm for this industry today, if it ever was. In the light of our experience with the outcomes of an industry that became fixated on pumping out minimally viable product, it's a paradigm that heads towards what we would conventionally label as criminal negligence. And suppose the industry is incapable of making the necessary changes to create sustainable and safe products under its own volition. In that case, the intervention through regulatory standards is not only reasonable, but it is also necessary.

Written by Geoff Huston, Author & Chief Scientist at APNIC | 06-Feb-2021 01:16

My Experience With Starlink Broadband, It Passes "Better Than Nothing" Beta Test

May become the access answer for many at the end of the road.

The icicle-dripping dish in the picture is the antenna for Starlink, a satellite-based broadband service from SpaceX — one of Elon Musk's other companies. It came Saturday, just before the snow arrived here in Stowe, VT. It's heated, so I didn't have to shovel it out, and it's working despite its frozen beard.

The pandemic has shown us that it is socially irresponsible to leave any family without broadband access. That lesson hasn't been lost on our elected representatives. Gobs of money are going to broadband in the next year. Gobs of money have gone to broadband before with disappointingly slow progress towards universal access.

Depending on how quickly SpaceX can scale the service from today's limited availability and fix Beta reliability issues, Starlink could be an Internet answer for many currently unserved and underserved locations in America and around the world before the end of 2021. It's spookily easy to install; it's blazingly fast compared to anything but a fiber connection. It's more than adequate for email, uploads, and downloads today. It's adequate for streaming. Frequent short interruptions, planned and unplanned, make it unreliable for video conferencing and Voice over IP (VoIP). These are plainly disclosed in the marketing information for the Beta and should be fixed in the months to come, but seeing will be believing.


Starlink pretty much installs itself. It's fun to watch. You put the dish in its stand (hidden under the snow), run a wire into your house (that was the hard part for me), plug it in, stand back, and watch the dish search the sky and orient itself to the proper position. It quickly figures out where it is and downloads the current satellite schedule from the first satellite it talks to. Once positioned, the dish stops moving, and its electronics track the passing satellites. It won't work without a good view of the northern sky (in the northern hemisphere). Roof and long-pole mount kits are available.

You use an app on your smartphone to set the id and password for your network, and you're online. The same app tells you how well the service is doing, as shown in the screenshot.


Starlink is blazing fast. I've been getting speeds between 30 and 130 Mbps (megabits per second) for downloads and between 20 and 40 Mbps for uploads. This leaves DSL and the older geostationary-satellite-based services in the dust. It is faster than what you can get from most wireless ISPs and compares favorably with most cable and fiber services. Short of running a server farm in your basement or mining bitcoin, this is all the speed you could possibly need for work from home today. Obviously, you can stream many different videos to many different devices at this speed at the same time.
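As a rough back-of-the-envelope check on that streaming claim (the 5 Mbps per HD stream figure is my assumption, a typical streaming-service guideline, not from the article):

```python
# Rough capacity check: how many simultaneous HD streams fit in the downlink?
# Assumes ~5 Mbps per HD stream (an assumption; typical streaming guidance).
HD_STREAM_MBPS = 5

def max_streams(downlink_mbps: float) -> int:
    """Whole number of concurrent HD streams the link can carry."""
    return int(downlink_mbps // HD_STREAM_MBPS)

print(max_streams(30))   # low end of the measured range -> 6 streams
print(max_streams(130))  # high end of the measured range -> 26 streams
```

Even at the low end of the measured range, several households' worth of simultaneous HD video fits comfortably.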


Latency is the time it takes for a message (technically an IP packet) to get to a server somewhere and for the reply to get back to you. If latency is high, web pages build very slowly and, even more important, voice over IP (VoIP) has very poor quality, and videoconferencing may be impossible. Latency is the Achilles' heel of geostationary satellite services like HughesNet. Their satellites are so far away that each packet takes a long time to go up and down, even at the speed of light. Starlink uses low-earth-orbit satellites (LEOS), so the signal has a far shorter distance to travel. Typical latency is between 40 and 60 milliseconds, plenty good for conferencing, gaming, and even most high-frequency stock trading.
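The physics behind this gap can be sketched with a quick speed-of-light calculation (the altitudes are nominal figures; real latency adds slant range, ground routing, and processing delays on top):

```python
# Minimum radio round trip, speed-of-light only: four space legs per round
# trip (up and down on the way out, up and down on the way back), ignoring
# slant range, ground-station routing, and processing delays.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def min_round_trip_ms(altitude_km: float) -> float:
    """Lower bound on round-trip time via a satellite at the given altitude."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"Geostationary (~35,786 km): {min_round_trip_ms(35_786):.0f} ms")  # ~477 ms
print(f"Starlink LEO (~550 km):     {min_round_trip_ms(550):.1f} ms")     # ~7.3 ms
```

Geostationary physics alone eats roughly half a second before any network equipment gets involved, which is why those services can never match LEO latency no matter how they are engineered.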


Starlink says: "During Beta… there will ... be brief periods of no connectivity at all." Right now, my experience and that of most other users I've heard from is that there are brief periods of non-connectivity about three times per hour. These averaged 18 seconds in a twelve-hour test I ran. According to the app, in the last twelve hours, my dish's view of the sky was obstructed for four minutes, I lost four minutes due to Beta downtime, and there were 15 seconds when no satellites were available. However, measured from my computer, there was more downtime than just eight minutes and fifteen seconds. For comparison, I got about one interruption per hour with an average duration of 6 seconds while using my wireless ISP.
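Using the app-reported figures above, a quick sketch of the implied availability over that test window:

```python
# Implied availability over the twelve-hour window, from the app's numbers:
# 4 min obstructed + 4 min beta downtime + 15 s with no satellite in view.
window_s = 12 * 3600                 # 43,200 s
downtime_s = 4 * 60 + 4 * 60 + 15    # 495 s, i.e. 8 min 15 s

availability = 1 - downtime_s / window_s
print(f"{downtime_s} s down out of {window_s} s -> {availability:.2%} available")
```

That works out to just under 99% availability, which squares with the experience described: fine for email, browsing, and streaming, but marginal for live calls.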

These outages aren't noticeable while doing email and file transfer and web surfing. They usually don't interfere with streaming video, but they do keep me from using my Starlink connection for Zoom or Skype.

It's quite possible that, once I can dig my dish out, I'll be able to move it to solve the obstruction problem. It may also be solved for me as Starlink launches more satellites. They launched 60 more yesterday on one of SpaceX's reusable rockets and are planning another 60 Saturday! Beta services require adjustment, so it's credible but not certain that these interruptions will be gone by summer.

In a weather emergency, satellite-based services will stay up when lines and poles topple. As long as you have power, you'll have connectivity. Even when cellular fails because the towers have fallen down or run out of standby power, solar-powered satellites will be happily spinning.


To take part in the Beta, I had to buy a $499 dish and other equipment (actually $581.94 with tax and shipping) and agree to pay $99/month plus taxes for service. However, that's less than the price of most smartphones and not much more than typical cellphone monthly charges. There is no contract, and there is a 30-day no-fault money-back guarantee on the equipment. I expect there will be higher and lower prices available for different tiers of service and that competition will bring the equipment cost down. Richard Branson is also launching tiny satellites, although he has no service based on them yet, and Jeff Bezos says he will have such a service.

Future proofing:

Absent the outages, Starlink is more than adequate for most home and work from home use today. However, some fiber providers are offering gigabit service (1000Mbps) today. The fact that these speeds are being sold means that applications will develop that need them. It's not the end of the world if you have to scrap a $600 investment in a few years, but will Starlink be able to keep up? They say:

"During beta, users can expect to see data speeds vary from 50Mb/s to 150Mb/s and latency from 20ms to 40ms in most locations over the next several months as we enhance the Starlink system…

As we launch more satellites, install more ground stations and improve our networking software, data speed, latency and uptime will improve dramatically. For latency, we expect to achieve 16ms to 19ms by summer 2021."

Elon Musk has talked about being able to offer Gigabit service over Starlink and described the technology that will be used to provide it (basically laser communication between satellites, which is only in the experimental stage now).


There is no doubt that Beta Starlink is "better than nothing." It is also clearly better than the older satellite services. For many, it will be better than available DSL. Starlink is faster than most wireless ISPs, although their technology is improving as well. To use a wireless ISP, you need a good and reasonably close view of their antenna; to use Starlink, you need to see the right part of the sky. Location will often be the decider between these alternatives.

Starlink is NOT better than high-end fiber or the fiber/coax offered by cable companies. Where population is sufficiently dense, fiber will remain the connection of choice; I will get a fiber connection as soon as I can. But fiber won't reach everywhere for years, if ever; we need connectivity now. If Starlink can scale, it can be a big part of bringing all of rural America online.

You can find out if you can be part of the Beta by using the availability check on Starlink's website.

Written by Tom Evslin, Nerd, Author, Inventor | 05-Feb-2021 20:48

Enriching Know-Your-Customer (KYC) Practices With IP Intelligence

Know-your-customer (KYC) policies aim to minimize the risk of money laundering, bribery, and other types of fraud. While KYC was originally implemented in financial institutions, companies outside the financial sector have adopted it, with digital transactions as the primary driver. These days, the approach is enforced by virtual asset dealers, nonprofit organizations, and even social media companies.

The fight against fraud is challenging, but some KYC solution providers have learned to utilize technical information like IP address and device geolocation intelligence as part of their KYC analysis.

Why Is IP Intelligence Vital in the KYC Process?

To answer this question, let us consider the scenario where Client A and Client B downloaded a banking app to open an account. As part of the identity verification process, they would have to upload a photo of their identification card and take a selfie.

The process ensures that the documents are verified and the account applicant is truly the one conducting the transaction. But aside from identity verification, the clients' IP addresses and geolocation are also validated for the reasons identified below.

Check for Suspicious Activities

Among the significant reasons for checking a client's IP address is to ensure that it is not associated with suspicious or malicious activities. For example, if Client A's IP address in our hypothetical scenario is 49[.]234[.]50[.]235, IP intelligence sources, such as the Threat Intelligence Platform, would flag it as malicious.

As a result, Client A's account application would be denied. On the other hand, Client B, whose IP address is 49[.]225[.]140[.]100, would successfully create an account since his address is clean.
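A minimal sketch of this onboarding gate, with a local denylist standing in for a real threat-intelligence lookup (the IPs are the article's hypothetical examples, written undefanged; the lookup would in practice be a call to a reputation feed or API):

```python
# KYC onboarding gate sketch: deny signup when the client's IP appears on a
# threat-intelligence denylist. The denylist is a local set here; in practice
# it would come from a reputation feed or API (hypothetical integration).
MALICIOUS_IPS = {"49.234.50.235"}  # Client A's IP, flagged in the scenario

def screen_signup(client_ip: str) -> str:
    """Return the onboarding decision for a signup from client_ip."""
    return "denied" if client_ip in MALICIOUS_IPS else "approved"

print(screen_signup("49.234.50.235"))   # Client A -> denied
print(screen_signup("49.225.140.100"))  # Client B -> approved
```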

Validate the IP Address and Geolocation of Succeeding Transactions

It is important to note that the KYC process does not stop after onboarding. It should follow the client's transactions throughout his or her tenure to protect both him or her and the organization. Every time Client B logs in to his or her account, the KYC guidelines mandate that his or her IP address be checked. So if at one point Client B uses a totally different IP address — say 185[.]220[.]100[.]241 — a red flag would be raised for reasons like those indicated below.

Different IP Geolocation

To recall our hypothetical scenario, Client B initially used the IP address 49[.]225[.]140[.]100 upon signup. The IP address is assigned to New Zealand, specifically in the Takapuna region. IP Geolocation API further revealed that the Internet service provider (ISP) is Vodafone New Zealand and the connection type is cable or digital subscriber line (DSL).

The geolocation of the new IP address 185[.]220[.]100[.]241, however, is Haßfurt in Germany and the connection type is mobile. Has Client B traveled to Germany? The bank should first reach out to him or her before allowing the new IP address access to the account.
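The per-login check described above could be sketched as follows (the session values mirror the article's hypothetical clients; a real system would resolve country and connection type through an IP-geolocation service):

```python
# Per-login geolocation check: flag a session for manual review when the
# country changes from the last known session. Values mirror the article's
# hypothetical scenario; a real system would query a geolocation API.
from dataclasses import dataclass

@dataclass
class Session:
    ip: str
    country: str
    connection: str  # e.g. "cable/dsl" or "mobile"

def review_needed(last: Session, current: Session) -> bool:
    """True when the login should be held for out-of-band confirmation."""
    return current.country != last.country

signup = Session("49.225.140.100", "New Zealand", "cable/dsl")
login = Session("185.220.100.241", "Germany", "mobile")
print(review_needed(signup, login))  # True -> contact the client first
```

A production rule would likely weigh more signals (ASN, connection type, distance, time since last login) rather than country alone, but the shape of the check is the same.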

Tor Exit Nodes and Other Anonymizers

The Threat Intelligence Platform listed tor-exit-14[.]zbau[.]f3netze[.]de as the domain resolving to the IP address 185[.]220[.]100[.]241. Banks and other financial institutions generally block Tor exit nodes, virtual private networks (VPNs), and other anonymizers as part of their anti-money laundering protocols.

The policy came about after a 2014 study by the Financial Crimes Enforcement Network (FinCEN), which found that 975 suspicious activity reports filed by banks were connected to Tor exit nodes. The amount lost to possibly fraudulent activity totaled US$24 million.

KYC is more than just identity verification. The process should also uncover underlying issues related to the client's past and present sessions or transactions. With IP intelligence, organizations can discover crucial information about every user. The mandates of the KYC policy answer these specific questions:

  • Where is the user located?
  • Does the user's location differ from previous sessions?
  • Is he or she using a new device?
  • Is the user hiding behind a Tor exit node or VPN?

By answering these questions, KYC solutions can help protect both the account owners and the organization.

Are you a cybersecurity researcher, KYC solution provider, or security product developer? Contact us to learn more about the IP and threat intelligence sources used in this post. We are also open to security research collaboration and other ideas. | 04-Feb-2021 02:57

Clarivate Domain Survey Reveals a 10% Increase in Cyberattacks

Impact of Domain Attacks – Almost a third (31%) of organizations have experienced a data breach involving their domains in the last 12 months. MarkMonitor Special Report: The Growing Role of Domains in IP

Clarivate has once again surveyed global business leaders about the importance of domain names to their organizations, including the role of domains as intellectual property (IP) assets. The 2020 survey followed up on our 2019 survey, revealing key year-over-year trends in how organizations manage, secure and budget for domain names. In this blog, we review key trends from the new report.

Cybersecurity is top priority

Organizations reported an increased prevalence of cyber-attacks targeting their domains, with many reporting losses to data, revenue and reputation as a result. Unsurprisingly, security was also reported as the biggest challenge when managing a domain portfolio. Overtaking "promoting new products or services" which topped our 2019 survey, "mitigating brand abuse" is now the biggest factor motivating domain management strategies. Our report highlights the top security tactics organizations are deploying as best practices for domain protection.

Domain names are considered valuable IP assets

Survey results show an increasing number of organizations actively monitor the value of their domains, with the vast majority of respondents characterizing domains as an important part of their IP portfolio, on par with patents and trademarks.

Domain portfolio size, complexity and spend continue to grow

Reflecting the increased need for security and increased value attributed to domain names as IP assets, 2020 saw continued growth in the number of organizations using new generic Top-Level Domains (new gTLDs), an increase in overall domain portfolio size, and a resulting increase in total domain spend. With growing portfolio size, importance and complexity, organizations increasingly report making domain decisions by committee.

Written by Brian King, Head of Policy and Advocacy, Intellectual Property Group at MarkMonitor | 03-Feb-2021 19:34

The Netizen's Guide to Reboot the Root (Part II)

Rampant dysfunction currently plagues the Internet's root zone where a predatory monopolist has captured ICANN and is bullying stakeholders. This harms the public interest and must be addressed — here's how.

Introduction: Why the Internet Needs Saving Now

The first part of this series explained how Amendment 35 to the NTIA-Verisign cooperative agreement is highly offensive to the public interest. But the reasons for saving the Internet are more fundamental to Western interests than a bad deal made under highly questionable circumstances.

The Chinese Communist Party is one of the world's foremost experts at conducting censorship at scale, and its experience with the Great Firewall — which requires censorship checkpoints at each physical place where data flows across China's national border — has offered object lessons to all would-be tyrants: inefficient censorship is easily circumvented with readily available software tools. It would be foolish to believe that China and others aren't looking at ways of improving the efficiency and efficacy of censorship — including by looking further up the Internet "stack."

The Internet is the global communications medium but, as a distributed network of networks that is designed to be massively redundant and where participation is entirely voluntary, it has no single point of failure and, consequently, no single choke point for controlling content with censorship. The next best thing, however, is a centralized domain name registry that offers the ability not just to block undesirable content, but to make it cease to exist. This would make circumvention a moot point by leaving nothing to be accessed by getting around content blocking.

Doubters should consider Amendment 35 to the .com cooperative agreement where, amidst a terribly awful deal, the Commerce Department insisted that .com remain content-neutral. This clearly recognizes the threat of registry-level interference with content. Some engineers and others argue that users can point somewhere else if censorship becomes an issue. But this is a mackerel in moonlight that both shines and stinks at the same time. This is because, while technically true, the reality is that changing a single domain name is an extremely rare occurrence because of the massive effort and expense required to change habits, programming, and learned behaviors.

Just consider the recent history of over a thousand new top-level extensions and non-ASCII Internationalized Domain Names (IDNs) — such as Cyrillic, Hebrew, and Hangul scripts — which have required a significant Universal Acceptance effort that is still ongoing. Only in an ivory tower would changing a major domain name registry or the entire DNS be thought feasible. Even more alarming is to consider whether we would even know that such a change had become necessary.

It may be that we have already witnessed a major attempt to take control of a populous and popular legacy registry: Ethos Capital's attempt to buy control of the .org registry from the Internet Society (ISOC) in a closed-door transaction worth more than a billion dollars. Even at this lofty sum, .org wasn't being sold to the highest bidder — rather, it was being sold to whomever ISOC felt like selling it to. The transaction failed when ICANN's board unexpectedly declined to approve the change of control that is required by the .org registry agreement. Several former senior ICANN executives were involved in the transaction — including ex-CEO Fadi Chehade, who was presented as a consultant to the deal but, it was later revealed, is actually co-CEO of Ethos Capital. Meanwhile, the sources of funding for the acquisition — nearly a billion dollars — remain shrouded in mystery.

With that in mind, let's get to the second part of saving the Internet in three simple steps.

Ctrl-X: Delete Presumptive Renewal From Registry Agreements

In addition to addressing the terribly awful Amendment 35, the U.S. government also must step in and do what ICANN cannot do for itself since becoming neutered by terms of a 2006 settlement agreement that ended litigation with Verisign. This agreement established Verisign as the de facto .com registry operator in perpetuity in exchange for a pittance paid to ICANN. This may be the first lopsided quid pro quo between ICANN and Verisign — the original sin, perhaps — although any casual observer knows that it wasn't the last.

Following this 2006 settlement, ICANN became unable to counterbalance its contracted parties effectively. This became even more the case after presumptive renewal became a standard feature of DNS registries, included in every registry agreement. Without the threat of Armageddon — namely, termination of the rights to operate a registry — there is no meaningful oversight; a legitimacy gap has formed in the resulting accountability vacuum, and it is poisoning everything.

Consider that presumptive renewal is usually seen with utilities, and the rationale is straightforward: a utility company is granted a monopoly in exchange for active government regulation, particularly on pricing; a utility makes large infrastructure investments in exchange for the assurances provided by presumptive renewal. However, this equation doesn't work without active regulatory oversight, and the whole edifice becomes rotten when infrastructure investment isn't ongoing but, as Klaus Stoll recently reminded readers of Capitol Forum, was amortized and depreciated long ago.

It is worth noting that governance integrity often deteriorates slowly: most people operate by established rules from habit, and it takes time for the mice to catch on that the cat's away. Also, these mice aren't stupid, and they know their interests aren't served by drawing attention to the anti-competitive bacchanal of profiteering that is occurring in the absence of appropriate oversight. Then factor in that the root is arcane and technical, that there were no known operational failures, and that many are complicit in deliberately obscuring what has been going on, and that's how time flies to 2021 — fifteen years after ICANN was neutered — when everything seems broken and everybody is pissed off, but nobody knows how to fix the problem, or maybe even why they're pissed off in the first place.

The U.S. government should incubate a renewal of governance legitimacy by asking a federal judge to find that presumptive renewal is inherently anti-competitive and that such language should be removed from registry agreements. Precedent for this is found in a 2010 ruling by the 9th Circuit of the U.S. Court of Appeals, in CFIT v. Verisign, which found that the .com registry agreement's presumptive renewal combined with the power to increase prices by up to 7% in four out of every six years plausibly indicates an anti-competitive conspiracy.

ICANN has repeatedly taken the position recently that it is not a price regulator. This is just about the biggest load of baloney and completely contradicts the premises under which ICANN was formed and the expectations that were set by the U.S. government and others as to the oversight role that "NewCo" was supposed to play in the DNS. ICANN's position has nothing to do with benefitting the public interest and everything to do with the reality that it is an organizational eunuch that overlooks its oversight responsibilities because presumptive renewal leaves it without any real enforcement mechanism. In short, it is cowed by fear of further predatory litigation by Verisign and, by seeking to avoid it, hangs the public interest out to dry.

No bueno.

There is an argument to be made that non-legacy registries face significantly more competitive operating environments and, as such, should maintain presumptive renewal in their agreements with ICANN. This author takes no position either for or against such an eventuality. However, the current presumptive renewal that is in all registry agreements is indelibly tainted by the circumstances in which Verisign and ICANN settled litigation in 2006. For presumptive renewal to have any legitimacy moving forward, it must be the product of deliberate community policy development following a judicial ruling that finds presumptive renewal combined with pricing power to be inherently anti-competitive and voids it from registry agreements.

It should be noted that one or more private parties can attempt to accomplish this by pursuing legal action against Verisign and ICANN — as noted earlier, it has been done before in CFIT v. Verisign. Significantly, a private party class-action lawsuit can seek to be awarded monetary damages. The Internet Commerce Association has estimated that registrants overpay for .com domain names by $1 billion every year. Since antitrust laws allow for awarding up to four years of damages trebled, a quick non-legal, back-of-the-envelope calculation reveals that a private party class-action lawsuit on behalf of all .com registrants might potentially seek damages somewhere in the neighborhood of $12 billion.

U.S. Senator Everett Dirksen is rumored to have once said, "a billion here, a billion there, sooner or later you're talking about real money" — perhaps the germane question is: will the next CFIT please stand up?

However, the downside of a private party suit is that potential litigants will do as CFIT did when they brought an antitrust suit a decade ago — settle and leave the job unfinished. This would be dangerous, and the root zone of the global Internet is too important to be left to the vagaries of private party class-action litigation. The U.S. government bears the responsibility for renewing root zone governance because it originated the Internet, installed the system of governance, indulged it, and ignored it, before unleashing the resulting warped monstrosity into an unsuspecting world after Fadi flew in on his magic carpet and cooked up some NetMundial nonsense in Brazil with now-disgraced Dilma.

Contrary to a recent commenter, who wrote, "if it ain't broke don't fix it," more appropriate here is the old saying, "you break it, you buy it." While the U.S. Government didn't necessarily break DNS governance, per se, it more than any other set the circumstances in which the dysfunction persists. It should act to fix the mess by hitting Ctrl-X and removing presumptive renewal from the root — Internet freedom may very well hang in the balance.

Stay tuned for the third simple step for saving the Internet — Ctrl-O: Open the Internet's Largest Registry to Market Competition. Also, be on the lookout for a post uncovering mysterious events from this past September that involved inexplicable and wild pricing adjustments, disappearing domain name registrations, and altered WHOIS records — all related to .com transliterated IDNs. For a sneak peek, check out Z.??? at — the full story is coming up next!

Written by Greg Thomas, Founder of The Viking Group LLC | 03-Feb-2021 19:25

Freedom of Expression Part 5: COVID Vaccines not Mandatory

In Part 4 of the Freedom of Expression series, I highlighted my concerns about the lack of transparency regarding the ingredients of the COVID-19 vaccines, which were addressed by the Council of Europe's Parliamentary Assembly on the same day (World Holocaust Day) that I raised them.

A recent Resolution by the Parliamentary Assembly of the Council of Europe will see the further regulation of social media on content relating to COVID-19. The Parliamentary Assembly of the Council of Europe (PACE) is one of the two statutory organs of the Council of Europe. It is made up of 324 parliamentarians from the national parliaments of the Council of Europe's 47 member states.

The issue that surfaces from the Resolution, as it pertains to Internet governance, is who defines what "misinformation" and "disinformation" are in this context. Arguably, there is content that censorship entities have classified as "hate speech" which has fallen into either of the above categories. Notably, the Council of Europe's Steering Committee on Media and Information Society (CDMSI), which is vested with the important mission of steering the work of the Council of Europe in the field of freedom of expression, media, and Internet governance, is currently accepting submissions until 16th February 2021, before the Ministerial recommendation on the approach to hate speech, including in the context of the online environment, is prepared.

In its 5th Sitting this year, on 27th January 2021, the Parliamentary Assembly passed Resolution 2632 (2021) on COVID-19 vaccines: ethical, legal and practical considerations.

Whilst noting that the rapid deployment of safe and efficient vaccines against COVID-19 would be essential to contain the pandemic, protect health care systems, save lives and restore global economies, and highlighting the need for international cooperation, adequate vaccine management and supply chain logistics, it encouraged member states to prepare immunization strategies to allocate doses in an ethical and equitable way, deferring to bioethicists and economists on distribution models should the vaccine be scarce.

Of significance is the resolution that the vaccine must be a global public good, with immunization made available to everyone, everywhere, and that member states, with respect to the COVID-19 vaccines, ensure the following:

  • that trials are of high quality, sound, and conducted in an ethical manner, in accordance with the Convention on Human Rights and Biomedicine (ETS No. 164, Oviedo Convention) and its Additional Protocol concerning Biomedical Research (CETS No. 195); (see 7.1.1)
  • that regulatory bodies in charge of assessing and authorizing vaccines against COVID-19 are independent and protected from political pressure; (see 7.1.2)
  • that special attention is paid to possible insider trading by pharmaceutical executives, or pharmaceutical companies unduly enriching themselves at public expense, by implementing the recommendations contained in Resolution 2071 (2015) on Public health and interests of the pharmaceutical industry: how to guarantee the primacy of public health interests? (see 7.1.6)
  • that systems are put in place to monitor the long-term effects of the vaccines, along with various other measures intended to safeguard the global public interest.

This stands in contrast with jurisdictions like the United States of America, where regulatory bodies in charge of assessing and authorizing vaccines against COVID-19 have lost their independence, with regulators being allowed to patent discoveries and abuse their positions in pushing their own interests, and where the US Congress has granted indemnification to pharmaceutical companies.

It was in 1950 when President Truman signed the Executive Order 10096, which has since enabled Federal employees to hold patents developed on Federal government time (Green, J. Government vs. the Federal Employee: Who Owns the Patent?, Letter from the Office of General Counsel). Federal courts have upheld this, see Heinemann v. the U.S., 796 F.2d. 451 (Fed. Cir. 1986) cert. denied, 480 U.S. 930 (1987). President Truman's Order was subsequently modified by President Kennedy transferring jurisdiction of patent determinations from the Government Patents Board to the Secretary of Commerce, and in 1988, as a result of President Reagan's initiative to transfer government technology to the private sector and to encourage federal employees to generate inventions, the US Commerce Department issued regulations (37 C.F.R. part 501) which addressed federal government ownership rights to patents developed by federal employees (supra).

The Supreme Court of the United States concluded that federal law protects vaccine makers from product-liability lawsuits that are filed in state courts and seek damages for injuries or death attributed to a vaccine; see Bruesewitz et al. v. Wyeth LLC, FKA Wyeth, Inc., et al., 562 U.S. 223, 131 S. Ct. 1068 (2011). The Bruesewitz family had brought a lawsuit against Wyeth (now Pfizer) in which they argued that their daughter suffered seizures after her third dose of the DTP vaccine in 1992 and that a safer alternative had been available but not made available to them. The vaccine was taken off the market in 1998 and replaced. Pfizer argued that a Supreme Court ruling favoring the family would have sparked countless lawsuits, threatening the supply of childhood vaccines. One of the rationales of the Supreme Court's decision came from the fact that vaccine manufacturers fund, from their sales, an informal, efficient compensation program for vaccine injuries. The Supreme Court explained that the National Childhood Vaccine Injury Act of 1986 (NCVIA or Act) preempts all design-defect claims against vaccine manufacturers brought by plaintiffs seeking compensation for injury or death caused by a vaccine's side effects.

We see the distinction between the Council of Europe and the United States in their approaches, whether to the treatment of corporates, taxation, or safe harbor principles, and, in this case, to producing a global public good without compromising the global public interest.

Also of note in the Council of Europe's Parliamentary Assembly Resolution 2632 (2021) on COVID-19 vaccines: ethical, legal and practical considerations is the right to refuse the vaccine and not to be pressured into taking it, nor discriminated against for not taking it.

The Parliamentary Resolution is also clear about ensuring:

  • that citizens are informed that vaccination is NOT mandatory and that no one is politically, socially or otherwise pressured to get themselves vaccinated if they do not wish to do so (see 7.3.1)
  • that no one is discriminated against for not having been vaccinated, due to possible health risks or not wanting to be vaccinated (see 7.3.2)
  • that early effective measures are taken to counter misinformation, disinformation and hesitancy regarding COVID-19 vaccines;
  • that transparent information is distributed on the safety and possible side effects of vaccines, working with and regulating social media platforms to prevent the spread of misinformation (see 7.3.4)
  • that the contents of contracts with vaccine producers are communicated transparently and made publicly available for parliamentary and public scrutiny (see 7.3.5)

Other things being equal, I am glad that the Council of Europe has made it mandatory for pharmaceutical companies to publish and disclose all ingredients, side effects and pursuant to Resolution 2337 (2020) that parliaments are playing their triple role of representation, legislation and oversight in pandemic circumstances. What concerns me, though, is the increasing online censorship and the abuse thereof.

Note: For those who wish to make submissions to The Committee of Experts on Combating Hate Speech (ADI/MSI-DIS), send your submissions to CDMSI Secretariat by 16th February at the latest.

Written by Salanieta Tamanikaiwaimaro, Director | 02-Feb-2021 22:35

A Patchwork Quilt: Abuse Mitigation, the Domain Naming System and Pending Legislation

A few weeks ago, Appdetex published a blog with predictions for 2021, and admittedly, by the date of publication, there were already clear indications that one prediction was in flight.

In our blog post, we'd said, "With the global domain name system failing to abate abuse, and, in fact, thwarting consumer protection, get ready for a patchwork of local laws targeting attribution and prosecution of bad actors… Get ready for some confusion and turmoil in the world of notice and takedown related to local laws and regulations."

Since May 2018, it's been harder for brands to mitigate consumer harm resulting from infringing domain names. A recent study from Interisle Consulting bears this out: WHOIS data has gone from being over 75% available to just above 13% since ICANN implemented its response to the GDPR. It should come as no surprise, then, that regulators in the US and EU are poised to act to protect consumers. Late last year, both the United States (US) and European Union (EU) governments had already begun moving to make domain name registrant contact data (WHOIS) more accessible to protect consumers and internet users.
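The practical effect of redaction is easy to see in the records themselves. Below is a minimal sketch, using an entirely hypothetical redacted record (field names and redaction markers vary by registrar), of how a brand-protection workflow might flag a WHOIS response whose registrant contact data is unavailable:

```python
# Illustrative sketch: detecting GDPR-style redaction in a WHOIS record.
# The sample response below is hypothetical; real records vary by registrar.

SAMPLE_WHOIS = """\
Domain Name: example-infringing-site.com
Registrant Name: REDACTED FOR PRIVACY
Registrant Organization: Privacy service provided by Withheld for Privacy ehf
Registrant Email: Please query the RDDS service of the Registrar of Record
"""

REDACTION_MARKERS = ("redacted", "privacy", "withheld", "query the rdds")

def parse_whois(text: str) -> dict:
    """Parse 'Key: Value' WHOIS lines into a dict."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def is_redacted(value: str) -> bool:
    v = value.lower()
    return any(marker in v for marker in REDACTION_MARKERS)

record = parse_whois(SAMPLE_WHOIS)
redacted = {k: v for k, v in record.items()
            if k.startswith("Registrant") and is_redacted(v)}
print(f"{len(redacted)} of 3 registrant fields are redacted")
```

In a record like this, none of the registrant fields identify a contactable party, which is exactly the attribution gap the legislation described below tries to close.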

In its proposed Revised Directive on Security of Network and Information Systems (NIS2), the EU has become much more specific about intermediaries' obligations, including those of Domain Name System (DNS) service providers. The legislation specifically identifies hosts, DNS providers, TLD registrars, and registries as part of the solution and is expected to mandate that they act swiftly to mitigate consumer risk and balance privacy and harm more carefully. Public stakeholder comments have been gathered, and the EU is now readying the legislation for adoption and implementation by member states.

Meanwhile, buried in the thousands of pages of the omnibus Consolidated Appropriations Act of 2021 is an instruction to the National Telecommunications and Information Administration (NTIA). The NTIA is "directed," through their position on the Internet Corporation for Assigned Names and Numbers (ICANN) Government Advisory Committee, to "work with ICANN to expedite the establishment of a global access model that provides law enforcement, intellectual property rights holders, and third parties with timely access to accurate domain name registration information for legitimate purposes." There are also rumblings about stand-alone legislation to protect consumers by making WHOIS data more accessible.

While these actions seem both relevant and helpful on their surface, the fact that legislative solutions are needed at all represents a failure of ICANN's ironically named expedited policy development process (EPDP). ICANN's glacial-paced EPDP has yielded very little in the past two years, proposing only a guideline for implementing a toothless tool for investigating and abating abuse. This guideline, judged so useless that even EPDP participants from some constituencies voted against its implementation, will likely require two or more years to implement.

Had ICANN and the multi-stakeholder community acted in a balanced manner to protect both privacy and the internet from harm by bad actors, it would have been unnecessary for governments to act. Sadly, it is now up to brands to do their best to remediate consumer abuse and harm without the help of the governing body of the internet. Worse still, a patchwork of legislation will be yet another complication for both brands and DNS participants.

Written by Frederick Felman, Chief Marketing Officer at AppDetex | 02-Feb-2021 21:06

Radix's H2 2020 Premium Domains Report

We are excited to share our bi-annual premium domains report for the second half of 2020. This report gives a full overview of premium domain sales across our nTLD portfolio from 1st July 2020 to 31st December 2020.

Some of the key highlights from the report include:

  • Total premium retail revenue amounted to over $2M, a 2.3% growth over H1 2020's $1.96M.
  • A total of 1,194 new names were sold while 1,160 were renewed.
  • .TECH, .ONLINE and .STORE were the highest-grossing TLDs with respect to premium sales.
  • New premium registrations contributed ~40% of the total premium revenue for this period.
  • The highest share of new premium registrations can be attributed to GoDaddy, at 40% of total premium sales.
  • Over 50 domains were registered at a tier price of $5,000 or more.

Take a look at the detailed report here. | 01-Feb-2021 23:57

Limitations and Laches as Defenses in Domain Name Cybersquatting Claims

UDRP Paragraph 4(c) states as a preamble that "[a]ny of the following circumstances, in particular, but without limitation, if found by the Panel to be proved based on its evaluation of all evidence presented, shall demonstrate your rights or legitimate interest to the domain name for purposes of Paragraph 4(a)(ii)." Three nonexclusive circumstances are listed. There is no specific mention of equitable defenses, and in fact, no length of delay in commencing a proceeding alone blocks forfeiture of the challenged domain name.

The consensus on this issue is long-standing, finding expression in decisions reaching back to the earliest cases. In The New York Times Company v. New York Internet Services, D2000-1072 (WIPO December 5, 2000), the Panel observed: "Even if NYIS's contentions as to laches had been properly pleaded, it is apparent that NYIS could not prove laches." Later cases include Lowa Sportschuhe GmbH v. Domain Admin, Whois Privacy Corp., D2017-1131 (WIPO August 1, 2017) and Mile, Inc. v. Michael Burg, D2010-2011 (WIPO February 7, 2011) (Panels have "generally declined to apply the doctrine of laches."). In Pet Plan Ltd v. Donna Ware, D2020-2464 (WIPO November 30, 2020), the Panel "recognizes that the [laches] issue has been addressed in previous UDRP decisions and that the doctrine or policy of laches or estoppel has not been applied to proceedings under the UDRP."

While the absence of a limitation period does not necessarily result in respondents forfeiting their domain names, the consensus is that equitable defenses are not applicable in defending against cybersquatting claims. This raises an important issue, namely what evidence is necessary, or what from the record can be inferred, that would support granting or denying the complaint. Clearly, a respondent cannot simply allege limitation or laches; it certainly puts itself at risk by defaulting in appearance.

To start with, delay by itself is a neutral fact. Yet there are factual circumstances that call out for such a defense, and in fact, the defense is hidden in plain sight in Paragraph 4(c)(i) of the Policy. The Panel in Impala Platinum Holdings Limited v. Domain Admin, Privacy Protect, LLC / Domain Admin, Domain Privacy Guard Sociedad Anónima Ltd., D2020-2268 (WIPO November 13, 2020) noted that

[i]n certain circumstances, it may be that a respondent can point to some specific disadvantage which it has suffered as a result of a delay by a complainant in bringing proceedings, which may be material to the panel's determination.

One disadvantage, of course, is that in the extended interval, the respondent has built a business; it is using "the domain name in connection with a bona fide offering of goods or services." While the circumstances were not present in Impala Platinum, the concept is a central feature of the jurisprudence (as indeed it is in trademark infringement cases). Where the proof establishes detrimental reliance, respondents retain control of their domain names. The Panel in Dealhunter A/S v. Richard Chiang, D2014-0766 (WIPO July 17, 2014) noted that "[o]pinions have differed on the applicability of laches or delay in UDRP proceedings":

This Panel's view is that delay in filing a complaint is not an automatic bar to a complaint, but nor can it be ignored, for all the facts must be taken into account in all proceedings and a decision made in the light of all the circumstances of the individual case.

Paragraph 4(c)(i) of the Policy states that "[if] before any notice to you" you are "us[ing] the domain name . . . in connection with a bona fide offering of goods or services," your registration is lawful. It is worth noting that no amount of accusation is sufficient to support a claim of cybersquatting, but it should also not be forgotten that some respondents have lost their domain names after holding them for over twenty years. This naturally raises serious concerns of bias in favor of trademark owners.

While "delay by itself" is not a defense — noting, however, that delay of any long duration may undermine claims of cybersquatting — it is not the only factor in determining the outcome. In passing the baton for combating cybersquatting to ICANN in April 2000, WIPO recommended that "claims under the administrative procedure [should not] be subject to a time limitation" (Final Report, Paragraph 199). ICANN agreed, and the UDRP contains no limitation period for making a claim.

Rather, determining whether complainants "state a claim" depends on the factual circumstances each party marshals in support of its position. In Square Peg Interactive Inc. v. Naim Interactive Inc., FA 209572 (Forum December 29, 2003) (to take one of many examples) the Panel held that "[a]lthough laches by itself is not a defense to a complaint brought under the Policy, Complainant's delay in seeking relief is relevant to a determination of whether Respondent has been able to build up legitimate rights in the Domain Name in the interim, and whether it is using the Domain name in bad faith" (emphasis added).

In AF Gloenco, Inc. v. CT PACKAGING SYSTEMS, INC., FA1805001785831 (Forum June 28, 2018), the Panel held that the delay "has cemented Respondent's business reliance upon the disputed domain name to conduct crucial online operations and constitutes an implied authorization for that use of the name by Respondent." "Implied authorization" means "acquiescence" (an equitable defense). This holding is not alone. See also Wiluna Holdings, LLC v., Inc Privacy ID# 1100134, FA1805001789612 (Forum July 16, 2018) ("Respondent points to the nine-year delay in bringing legal proceedings. Therefore, the Panel may consider the doctrine of laches as additional evidence towards Respondent.").

In COLAS v. Domain Administrator, Daruna, LLC., D2020-0560 (WIPO June 6, 2020), "the Respondent states that the disputed domain name was registered more than 18 years ago and calls for the 'doctrine of laches' to be applied to the case." Although the Panel declined to rule on laches, it nevertheless found that Complainant failed to prove bad faith:

Where PPC links directly target a complainant's rights, this may lead to a reasonable inference, in and of itself, that the domain name used for the corresponding website was registered in the knowledge of such rights and with intent to target them. However, it is not clear from the evidence in the present case that the PPC links do target the Complainant's mark.

The issue of delay and the application of limitations and laches has (not surprisingly!) become an obsession in some quarters, with some urging a Policy amendment that would have the effect of limiting rights holders in UDRP proceedings to claims brought within a declared limitations period. But what would that period be, and should there be one? Under U.S. trademark law (and this is likely true of other jurisdictions), there is no explicit statute of limitations for trademark infringement or cybersquatting — when the need arises, U.S. federal courts refer to analogous state statutes of limitations in applying the corresponding presumption of laches. Analogous state statutes of limitations are either three or four years, but cybersquatting, like trademark infringement, is a tort, so the statute of limitations is forever refreshing.

But whatever the limitation may be, it is not conclusive because the Lanham Act also provides for equitable defenses, even against marks that have become incontestable. 15 U.S.C. § 1115(b)(9) provides that a person's right to a mark is subject to "equitable principles, including laches, estoppel, and acquiescence." Courts apply laches to address the inequities created by a trademark owner who, despite having a colorable infringement claim, has unreasonably delayed in seeking redress to the detriment of the defendant. UDRP decisions can be cited in which Panels have applied equitable principles in response to persuasive facts, and when such facts are present Panels have been extra careful in explaining the legal basis for denying complaints even if it is not explicitly for laches.

Understandably, this does not give total comfort to registrants whose business model is monetizing and reselling domain names. No wonder they grow apprehensive when they see Panels awarding generic three-letter domain names to complainants, including names held by respondents for over 20 years. Their anxiety is magnified each time a complainant is awarded a long-held domain name. There is some comfort in knowing that the consensus on laches is offset by a parallel consensus that unexplained delay will surely have consequences if 1) complainant lacks proof the domain name was registered in bad faith, 2) the delay is unreasonable and unexplained, or 3) respondent rebuts by demonstrating that it has either or both rights and legitimate interests in the domain name.

To take another example, in Aqua Engineering & Equipment, Inc. v. DOMAIN ADMINISTRATOR / PORTMEDIA HOLDINGS LTD, FA1805001785667 (Forum June 25, 2018), Respondent vigorously argued that Complainant had the burden of explaining why it had waited so long, citing numerous cases including Bosco Prod., Inc. v. Bosco email Servs., FA94828 (Forum June 29, 2000) ("Without determining if the passage of considerable time would alone bar Complainant from relief in this proceeding, the Panel notes that Complainant does not explain why it has waited nearly four years to try and resolve [the domain name dispute]."). As it happens, though, as with other cases already cited, the decision did not turn on delay. The Panel denied the complaint for Complainant's failure to pass the "rights" test under paragraph 4(a)(i) of the Policy.

To come within the compass of Paragraph 4(c)(i), respondents must be able to marshal facts and argument for an equitable defense. However, much depends on marshaling the right facts, and this is not a skill found in every party's or representative's toolbox.

There is a substantial body of Lanham Act decisions denying complaints for unexplained or excessive delay. The factors isolated by judges are no less applicable to decision-making in UDRP cases. For laches to apply under U.S. trademark law the defendant must show (1) "that the plaintiff had knowledge" — actual or constructive — "of the defendant's use of an allegedly infringing mark"; (2) "that the plaintiff inexcusably delayed in taking action with respect to the defendant's use"; and (3) "that the defendant would be prejudiced by allowing the plaintiff to assert its rights at this time." Chattanooga Mfg., Inc. v. Nike, Inc., 301 F.3d 789, 792-793 (7th Cir. 2002). Determining parties' rights "requires a qualitative examination of the parties' words and conduct and an equitable evaluation of the length of the delay and the degree of prejudice to the defendant if the trademark owner's rights are enforced," which "generally requires a factual record." Hyson USA, Inc. v. Hyson 2U, Ltd., 821 F.3d 935, 941 (7th Cir. 2016). Paragraph 4(c)(i) of the Policy embodies part of this concept in "[b]efore notice," but the "prejudice" element must be proved. See, for example, Leet Woodworking, LLC v. Zhong Jinzhang, D2018-1137 (WIPO July 10, 2018) ("The Complainant brought no evidence showing previous or current sales of products under the BOARD STALL trademark. No evidence was proffered showing a website offering any products under the BOARD STALL trademark.").

This is not to minimize registrants' anxieties over potential unfair decisions transferring long-held domain names acquired for their inherent value. (The long-term holding issue has not emerged thus far with the Uniform Rapid Suspension System because new gTLDs only began arriving in the market in 2014 — there has not been enough elapsed time). As already noted, the outcome in the ADO case (to take one example currently pending in federal court for a declaration under the ACPA) only heightens investor insecurity, although the incidence of divestment is extremely low. Nevertheless, some Panels may find bad faith in registering domain names composed of random letters and common terms regardless of the length of time held. As the time of holding continues to lengthen, it is likely that more long-held domain names will be challenged. Whether they are forfeited to complainants depends on the factual matrix.

Nevertheless (and recognizing there are exceptions), UDRP Panels take into account the same factors examined in federal actions in determining parties' rights. Because the stakes are so high, domain name holders must learn what Panels expect from them. If Panels are to make a "qualitative examination," it presupposes that domain name registrants build a record, but in many cases they default, and their silence condemns them. I am thinking of the IMI case, in which the domain name had been acquired 23 years before the complaint was filed, and there are other examples. Unless registrants curate their websites shrewdly and understand the evidentiary demands for rebutting or proving their rights, they will be vulnerable to losing their assets.

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP | 01-Feb-2021 20:14

Freedom of Expression Part 4: Censorship, COVID-19, the Media and Assault on Freedom of Expression

As I write this, it is World Holocaust Day, 27th January 2021, a memorial of the atrocious events that shocked and outraged the conscience of humanity and gave birth to the Universal Declaration of Human Rights in 1948, the year in which Holocaust survivors, the majority of whom were Jews, re-established the nation of Israel.

As I sit perched on my chair with coffee on this wintry night, I am also reminded of being in Paris in 2018 on the 100th anniversary of Armistice Day, when over 60 Heads of State, including then-President of the United States Donald Trump, Germany's Angela Merkel, Canada's Justin Trudeau and Russia's Vladimir Putin, joined French President Emmanuel Macron to reflect on how the Armistice ended World War I. The Armistice did not mean peace, as appalling wars ensued for several years. A few days after the anniversary, there were protests against fuel taxes and marches by the gilets jaunes (yellow vests).

On that same Armistice Day in Paris, my colleagues and I had organized and convened a Main Session, which I chaired, at the 13th United Nations Internet Governance Forum, titled "Media as a Cornerstone for Peace, Assault on Media Freedom and Freedom of Expression." We had invited experts such as Professor Luz Estella Nagle, an expert in international law and human rights and a former judge in Colombia who confronted drug lords and corruption earlier in her career; Professor Rasha Abdulla of the American University in Cairo, an expert in journalism; Mr Giacomo Mazzone, Head of Institutional Relations of the European Broadcasting Union; Facebook India's Ankhi Das; the Digital Rights Foundation's Shmyla Khan; and Dr Yik Chan Chin of Xi'an Jiaotong-Liverpool University in China, to share their reflections on several key public policy questions; see recording and transcripts here.

I reflect on the passage of time and the continuous assault on freedoms and liberties, in which the media has been captured either by the State or by an elite few who control interests in the media and manipulate it to do their bidding. Regardless of who is in control, the media continues to be a tool for propaganda, peddling content chosen by elites who have manipulated censorship and incited hatred. When the media in Western democracies remove consumers' capacity to choose content and form their own opinions, and when this seeps into social media as well, we know it is the last onslaught on freedom of expression as we know it.

In previous pieces, I had highlighted Twitter banning Linehan, and since then, outgoing President Donald Trump was banned from Twitter whilst still President on 8th January 2021. Today, it was reported that Colleen Oefelein, an associate literary agent with New York's Jennifer De Chiara Literary Agency, was dismissed after her boss learned that she owned accounts on Gab and Parler (2021, Apex World News). Mexico's Cardinal Juan Sandoval Iniguez is reported today to have been censored by Facebook for saying in a video that the pandemic will last for several years and for blasting Bill Gates (2021, Apex News). Bill Gates is one of the major shareholders in Facebook, and it would appear that content relating to him, the vaccines and the pandemic is censored across most social media platforms.

In 2016, Professor Rasha Abdulla had highlighted, in a workshop with then UN Special Rapporteur David Kaye, that the definition of fake news should require an intention to deceive. On 3rd March 2017, the United Nations Special Rapporteur on Freedom of Opinion and Expression, David Kaye, along with his counterparts from the Organization for Security and Co-operation in Europe (OSCE), the Organization of American States (OAS), and the African Commission on Human and Peoples' Rights (ACHPR), issued a Joint Declaration on Disinformation and Propaganda, see here.

Increasing censorship by most social media platforms is forcing people to move to platforms that do not censor content and that promote an open and free internet encouraging freedom of expression. The takeover of the media and all social media platforms is an aggressive war against the freedom to think, to reason, to compare information, to form and hold opinions, and to express oneself.

The assumption that censorship is necessary because people are incapable of differentiating content is as absurd as it is ridiculous, particularly when used as a weapon to take away our freedoms!

Sir Tim Berners-Lee, who invented the World Wide Web in 1989, and his World Wide Web Foundation launched a campaign in November 2018 advocating for diverse stakeholders to back a new Contract to protect people's rights and freedoms, see here. The Contract is of the philosophy that knowledge must be kept free whilst strengthening laws, regulations and companies to ensure the "pursuit of profit" is not at the expense of human rights and democracy.

If there is a time when information about the pandemic, the vaccines and COVID has to be accessible and transparent, it is now! People have the right to understand the ingredients of the vaccines and their impact on health, instead of having content filtered or censored in the name of the public good. Put simply, people have the right to access knowledge, and knowledge must remain free.

We have also seen a former Vatican official's open letter to the then President that highlighted a Great Reset Agenda, which the letter described as part of a Masonic plot.

To think that 25 years ago, John Perry Barlow, in his Declaration of the Independence of Cyberspace, understood the threats posed by governments; one wonders whether, in his declaration, he should also have considered the takeover of these freedoms by non-government entities. Travelling back in time to Paris in 2018, Professor Luz Estella Nagle reminded us that the media is the fourth pillar of a democracy, and she mentioned even then that people were losing faith in the media. Professor Nagle forewarned that censorship by companies that pay for advertisements was a threat.

Today, as we commemorate World Holocaust Day, we see the same evil that desired to control content then at work now, trying to control content and censor freedom of expression and civil liberties. Censorship is an evil that possesses those in power, or those in this information age who wield influence, causing them to behave in ways that affect us all.

What is increasingly apparent in this ecosystem, or appearance, of democracy is that vested interests have captured key stakeholders to influence legislation, and the Big Technology companies have become more powerful than nation-states. Two days ago, we witnessed the likes of Facebook and Google threatening to pull out of Australia over attempts by the government to require the companies to pay news publishers for articles they link to, see here. That, though, is for another time.

It is a sad day when the media has been weaponized and taken over not only by States as propaganda machines but by the corporate dollar. Those who rise to question this are branded as madmen or fired. Who will guard and defend the freedom of cyberspace, and was it ever really free?

The erosion of our civil liberties is already at our doorstep.

Written by Salanieta Tamanikaiwaimaro, Director | 28-Jan-2021 18:54

Appdetex Accelerates Growth With $12.2 Million Financing Led by Baird Capital

Investment, Expertise, Patent Pending Technology, and Global Reliance on Digital to Propel Appdetex's Expansion

Appdetex, a global brand protection leader and expert in online detection, assessment, and enforcement of online infringements, today announced that Baird Capital has led a $12.2 million Series C financing to fuel the company's growth, team, and market opportunity. Appdetex's existing investors, including First Analysis, Origin Ventures, and EPIC Ventures, also participated in the financing.

Four of the top five World's Most Valuable Brands, along with hundreds of other brands, depend on Appdetex's brand protection technology and expertise to defend their customer relationships, revenue, and reputation. With this additional investment, Appdetex will enhance its patented technologies and grow its sales and service teams to service its fast-growing customer base and a burgeoning list of partners.

"Malicious actors hawking fake sites, apps, ads, or with insidious purposes prey on internet users and try to insert themselves in between customers and the brands they trust," said Appdetex CEO, Faisal Shah. "For the valuable brands we protect, Appdetex mitigates digital channel risks on the internet, across advertising, social media, mobile app and eCommerce marketplaces as well as within other emerging digital channels. With this additional financing, we look forward to continuing to invest in our advanced technology and analytics platform to deliver innovative, multi-channel brand protection solutions for our customers."

"Amidst the COVID-19 pandemic, adoption of digital technologies and services has accelerated, with most consumers and businesses now relying heavily on digital channels for work, school, entertainment, shopping, communication and almost every aspect of their lives," emphasized Jim Pavlik, Partner with Baird Capital and newly appointed Appdetex board member. "Seeing an opportunity to profit on that reliance, malicious actors have increasingly attacked brands and their customer relationships in those same channels. With the cost of this ill-intended activity damaging so many companies, we are excited to partner with the Appdetex team and support their efforts in defending brands and ensuring customer trust in the digital buyer journey."

For more information on Appdetex's mission and products, visit For more information on Baird Capital's venture team and investment strategy, visit | 28-Jan-2021 15:07

SpaceX Is First With Inter-Satellite Laser Links in Low-Earth Orbit, but Others Will Follow

SpaceX initially planned to have five inter-satellite laser links (ISLLs).

SpaceX is willing to subsidize expensive hardware like laser links and end-user terminals in the short run.

When SpaceX first announced plans for Starlink, their low-Earth orbit Internet service constellation, they said each satellite would have five inter-satellite laser links (ISLLs) — two links to satellites in the same orbital plane, two to satellites in adjacent orbital planes, and one to a satellite in a crossing plane. They subsequently dropped the crossing link as too difficult and, when they finally began launching satellites, they had no laser links. Last year they tested ISLLs on two satellites.
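The originally planned topology can be sketched as a grid with wraparound. The plane and slot counts below are illustrative placeholders, not actual Starlink parameters; the sketch only shows how the four retained links (two in-plane, two adjacent-plane) pair satellites up:

```python
# Sketch of the in-plane / adjacent-plane ISLL topology described above.
# Plane and slot counts are illustrative, not actual Starlink parameters.
NUM_PLANES = 72      # orbital planes (hypothetical)
SATS_PER_PLANE = 22  # satellites per plane (hypothetical)

def isll_neighbors(plane: int, slot: int):
    """Return the four laser-link partners of satellite (plane, slot):
    the satellites ahead/behind in the same plane, and the same slot
    in the two adjacent planes (indices wrap around)."""
    same_plane = [(plane, (slot - 1) % SATS_PER_PLANE),
                  (plane, (slot + 1) % SATS_PER_PLANE)]
    adjacent = [((plane - 1) % NUM_PLANES, slot),
                ((plane + 1) % NUM_PLANES, slot)]
    return same_plane + adjacent

# Satellite in plane 0, slot 0 links across the "seams" of the grid:
print(isll_neighbors(0, 0))
# The dropped fifth (crossing-plane) link would have connected planes
# whose orbits intersect, where relative velocities are much higher.
```

The high relative velocity across crossing planes is presumably what made that fifth link "too difficult" to track with a laser terminal.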

Last November, SpaceX requested that the FCC modify their license to allow them to operate 348 satellites at an altitude of 560 km and an inclination of 97.6 degrees in order to serve the polar regions. This month, the FCC postponed their decision on the 348 satellites, but granted SpaceX permission to operate ten satellites to "facilitate continued development and testing of SpaceX's broadband service in high latitude geographic areas" and those ten satellites were launched as part of a 143-satellite rideshare.

That rideshare was a record-setter, but it is more interesting to note that those ten polar-orbit satellites were equipped with operational ISLLs, and Elon Musk confirmed that the remaining 338 would also have ISLLs if approved. In the same tweet, he confirmed that their inclined-orbit satellites would be equipped with ISLLs next year, but only the polar satellites would have them this year.

SpaceX must be confident that the full 348 satellites will be authorized since the first ten, while useful for tests, would not provide meaningful connectivity. My guess is that the polar-orbit satellites will be able to link to the inclined-orbit satellites with lasers when they begin launching next year. (Note that Telesat has applied for a patent on a Dual LEO Satellite System in which polar and inclined-orbit satellites communicate with each other).

How about the other LEO satellite projects?

Telesat plans to launch a hybrid constellation with laser links connecting polar and inclined satellites and is already working with Mynaric, a German laser communication company, on Blackjack, a LEO constellation being developed for the US Department of Defense, so Mynaric may supply the lasers for Telesat's satellites. It's also interesting that Mynaric's US office is in Hawthorne, California, home of SpaceX. OneWeb initially planned to equip their satellites with ISLLs but decided not to for cost and political reasons. As far as I know, Jeff Bezos' Project Kuiper has not officially committed to having ISLLs between their satellites, but they are hiring optical engineers to work on the constellation and are planning applications that will benefit from ISLLs. I don't know about the Chinese broadband LEO companies, but at least one Chinese company, Intane, produces space lasers. (Mynaric has withdrawn from the Chinese market due to political pressure.)

HydRON connecting the ground, LEO and GEO satellites and deep space.

In the long run, I expect that every LEO broadband provider that survives will be linking their satellites with ISLLs — doing so will lower latency and reduce the need for terrestrial ground stations. Furthermore, I expect we will see ISLLs between LEO, MEO and GEO satellites. Telesat and OneWeb may have the lead on multi-layer links since Telesat is already a well-established GEO satellite communication company, and Hughes is an investor in OneWeb. SES, which operates both MEO and GEO satellites, is an investor in the forthcoming European Union LEO constellation, and the European Space Agency, which has a long ISLL history, has recently launched project HydRON, which hopes to demonstrate the seamless extension of terrestrial fiber networks with "fiber in the sky" — a terabit GEO/LEO optical network in space.
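A rough calculation shows why ISLLs lower long-haul latency: light travels roughly 50% faster in vacuum than in optical fiber. The distances and constants below are assumptions for illustration, ignoring routing detours and switching delays:

```python
# Back-of-the-envelope check of the latency claim: light travels ~1.5x
# faster in vacuum than in optical fiber, so over long paths a LEO laser
# relay can beat terrestrial fiber despite the up/down hops.
# All figures below are rough assumptions, not measured values.
C_VACUUM = 299_792          # km/s, speed of light in vacuum
C_FIBER = C_VACUUM / 1.47   # fiber refractive index ~1.47
ALTITUDE = 550              # km, typical Starlink shell altitude
GROUND_KM = 10_000          # assumed great-circle distance (intercontinental)

def fiber_one_way_ms(distance_km: float) -> float:
    return distance_km / C_FIBER * 1000

def leo_one_way_ms(distance_km: float) -> float:
    # up to the shell, along the constellation in vacuum, then back down;
    # ignores zig-zag routing between satellites and switching delay
    path = 2 * ALTITUDE + distance_km
    return path / C_VACUUM * 1000

print(f"fiber: {fiber_one_way_ms(GROUND_KM):.1f} ms")
print(f"LEO  : {leo_one_way_ms(GROUND_KM):.1f} ms")
```

Under these assumptions, the laser-relay path is roughly 12 ms faster one-way over a 10,000 km route, and the advantage grows with distance.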

But, that's the long run. For now, SpaceX is far ahead of the field in nearly every dimension, including ISLL development and deployment. It seems they are willing to subsidize expensive hardware like laser links and end-user terminals while focusing on relatively affluent markets like North America, Europe and Australia in the short run.

Written by Larry Press, Professor of Information Systems at California State University | 27-Jan-2021 22:58

Information Protection for the Domain Name System: Encryption and Minimization

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I've discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I'll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I've also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transport Protocol Security (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN's Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential "confidentiality attacks" on the resolver-to-root connection.

From a cryptographer's perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn't the only one that matters, as I've observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn't exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn't have an acceptable risk/benefit tradeoff. In the simpler approach, to assure a relying party that a domain name doesn't exist, a name server would return a response, signed with its private key, stating that the requested name doesn't exist.

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server's provisioning system. This better solution balanced cryptography and operations. The theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In "textbook" DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it's also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to "do its job." As Matt Thomas described in his recent blog post on the topic, nearly half of all .com and .net queries received by Verisign's .com TLD servers were in a minimized form as of August 2020.
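The minimum-disclosure progression described above can be sketched in a few lines. This is a simplified illustration of the RFC 7816 idea, not a full resolver implementation (real resolvers handle zone cuts that span multiple labels, among other subtleties):

```python
# Simplified sketch of qname minimization (RFC 7816): the resolver reveals
# only one additional label to each name server in the referral chain,
# instead of sending the full query name to every server.
def minimized_qnames(full_qname: str) -> list[str]:
    """Return the successively longer names sent at each delegation level."""
    labels = full_qname.rstrip(".").split(".")
    # Build right to left: "com.", then "example.com.", then the full name.
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

# The root server sees only "com.", the TLD server only "example.com.", etc.
print(minimized_qnames("www.example.com"))
# -> ['com.', 'example.com.', 'www.example.com.']
```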

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn't exist. This means that the resolver doesn't always have to ask the authoritative server system about a domain name it hasn't seen before.
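The core idea of aggressive DNSSEC caching can be sketched as a range check. This is a toy illustration: real NSEC comparisons use the DNSSEC canonical name ordering of RFC 4034, not plain string order, and the names below are made up:

```python
# Toy sketch of aggressive DNSSEC caching (RFC 8198): a cached NSEC record
# proves that no name exists between two consecutive owner names in a zone,
# so the resolver can answer NXDOMAIN locally for any name in that gap
# without asking the authoritative server again.
def covered_by_nsec(qname: str, nsec_owner: str, nsec_next: str) -> bool:
    """True if qname falls in the (owner, next) gap proven empty by NSEC."""
    return nsec_owner < qname < nsec_next

# A cached NSEC saying "nothing exists between apple.example. and
# cherry.example." lets the resolver deny banana.example. on its own.
print(covered_by_nsec("banana.example.", "apple.example.", "cherry.example."))
```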

All of these techniques, as well as additional minimization alternatives I haven't mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver's exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented the aggregate interests of the resolver's many clients.¹ Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below,² and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.


Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I've described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate "post-quantum" scenarios and how to address them. (And there are yet other tools that I haven't covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they've often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign's research in future blog posts.

Read the previous posts in this six-part blog series:

  1. The Domain Name System: A Cryptographer's Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

  1. This argument obviously holds more weight for large resolvers than for small ones — and doesn't apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers. 
  2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about "root and TLD" and "SLD level and below" should be "root through effective TLD" and "below effective TLD level." For simplicity, I've placed the "zone cut" between TLD and SLD in this note. 

Written by Dr. Burt Kaliski Jr., Senior VP and Chief Technology Officer at Verisign | 27-Jan-2021 22:19

Post-Riot Domain Registration Trends: Findings From Tracking Trump-Related Domains and Subdomains

The U.S. Capitol riot on 6 January 2021 was an unexpected event following the 2020 U.S. elections. The incident also made headlines worldwide, prompting us to track the registration trend for Trump-related domains and subdomains. We also looked into two domains for Trump's e-commerce stores that Shopify shut down.

Trump's Online Stores Shut Down

Some entities have planned to withdraw their business dealings with Trump's organizations. For example, Shopify announced on 7 January 2021 that it shut down two e-commerce sites owned by the Trump Organization — trumpstore[.]com and shop[.]donaldjtrump[.]com. Indeed, visiting shop[.]donaldjtrump[.]com on 19 January 2021 still results in an invalid request error.

However, the domain trumpstore[.]com has been up and running again since 18 January 2021. It was down on 7–14 January 2021 but redirected to trump[.]com/trump-store, according to snapshots taken by the Wayback Machine.

The website's recovery happened after its WHOIS records were modified on 17 January 2021, as revealed by WHOIS History Search. Specifically, the registrant's contact organization was changed from The Trump Organization to DTTM Operations LLC. The modification was also detected the next day by the Registrant Monitor of the Domain Research Suite (DRS) when we started monitoring DTTM Operations LLC.

The Trend for Trump-Related Domain Names

We observed the registration trend for domains related to Donald Trump during the past two weeks. Specifically, these are the types of domains included in the study:

  • Typosquatting domain names: We downloaded the weekly typosquatting data feeds dated 4–10 and 11–17 January 2021. We then counted the number of domain names that contain the string "trump."
  • Subdomains: We also retrieved all subdomains containing the string "trump" that were added to the Domain Name System (DNS) on 6 January 2021.
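The counting step in the first bullet can be sketched as a simple substring filter (the sample names below are drawn from the examples later in this post; the function name is ours):

```python
# Minimal sketch of counting domain names that contain a given string,
# as described in the methodology above.
def count_matching(domains: list[str], needle: str = "trump") -> int:
    """Count domains whose name contains the needle, case-insensitively."""
    return sum(needle in d.lower() for d in domains)

sample = ["bringbacktrump.org", "example.com", "lettrumprun.shop"]
print(count_matching(sample))  # 2
```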
Typosquatting Domains

The domains the Typosquatting Data Feed picks up include those bulk-registered along with other similar-looking domains. Bulk registrations of domain names containing the string "trump" started to dwindle a week or two after the U.S. elections held on 3 November 2020. However, registrations peaked again during the week of the Capitol riot (i.e., the week ending 10 January 2021).

None of the Trump-related domains registered in the weeks ending 10 and 17 January 2021 were publicly registered under the Trump Organization or DTTM Operations LLC. Some examples of the domains are:

  • bringbacktrump[.]org
  • bringbacktrump[.]shop
  • bringbacktrump[.]store
  • donaldtrump[.]consulting
  • donaldtrump[.]expert
  • donaldtrump[.]win
  • donaldtrump[.]world
  • lettrumprun[.]com
  • lettrumprun[.]org
  • lettrumprun[.]shop
  • trumpinsurection[.]com
  • trumpinsurrection[.]org
  • trumpinsurrection[.]xyz
  • trumpintwitter[.]com
  • trumpistwitter[.]com

Subdomains Lookup returned 247 subdomains containing the string "trump" that made their way into the DNS starting 6 January 2021. These subdomains were related to 74 domain names, none of which could be attributed to the Trump Organization or DTTM Operations LLC based on Bulk WHOIS Lookup results. Around 62% of the domains, in fact, had redacted or privacy-protected WHOIS records.

The Trump-related subdomains include these examples:

  • trumpwon[.]deemerge[.]com
  • onlytrumps[.]ibleed[.]net
  • darthtrump[.]thelandofmethandhoney[.]com
  • telltrump[.]dacanesurfshop[.]com
  • thetrumphub[.]trumpsden[.]com
  • trumpybot2[.]repl[.]co
  • americantrumpcards[.]landandair[.]video
  • blog[.]trumpvsbiden[.]adss[.]com
  • cpcontacts[.]trump2[.]torweb[.]site

From the analysis above, it is possible that changes made to the WHOIS record of trumpstore[.]com indicate the organization's response to Shopify's shutdown. It would also not be surprising that more domains under the Trump Organization end up moved to DTTM Operations LLC. Additionally, the increase in the number of Trump-related typosquatting domains and subdomains that could not be attributed to Trump's organizations could also hint at domainers or even threat actors riding the newsworthy tide. | 27-Jan-2021 19:15

Alphabet to Shut down Loon, its Balloon Based Internet Access Project

Photo: X - The Moonshot Factory

Despite several groundbreaking technical achievements over the past nine years, Google's parent company Alphabet has decided to end the Loon project. The company said the road to commercial viability has proven much longer and riskier than hoped.

While Loon remained an experimental project for many years, last year it started to move into more practical operations. In July 2020, Loon announced it had begun providing commercial service in parts of Kenya, using about 35 balloons to offer service over a region of nearly 50,000 square kilometers.

Astro Teller, head of X, the advanced projects or "moonshot factory" division of Alphabet, says some of Loon's technology will live on in its other projects such as Taara. "This team is currently working with partners in Sub-Saharan Africa to bring affordable, high-speed internet to unconnected and under-connected communities starting in Kenya." | 26-Jan-2021 21:35

Nominations Open for Public Interest Registry (PIR) Board of Directors

Would you be interested in helping guide the future of the Public Interest Registry (PIR), the non-profit operator of the .ORG, .NGO and .ONG domains? Or do you know of someone who would be a good candidate? If so, the Internet Society is seeking nominations for four positions on the PIR Board of Directors. The nomination deadline is Monday, February 16, 2021, at 18:00 UTC.

More information about the positions and the required qualifications can be found at:

As noted on that page:

The Internet Society is now accepting nominations for the Board of Directors of the Public Interest Registry (PIR). PIR's business is to manage the international registry of .org, .ngo, and .ong domain names, as well as associated Internationalized Domain Names (IDNs).

In 2021, there are four positions opening on the PIR Board. The appointed directors will serve staggered terms, with half appointed to two-year terms and half to three-year terms, with terms beginning mid-year in 2021.

As Internet Society Trustee Ted Hardie wrote in a post, prior board or senior executive experience is preferred. All directors must have an appreciation for PIR's Mission and the potential impact of PIR decisions on the customers of PIR and the global community served by .ORG and the other TLDs PIR operates. Directors must be able to read and understand a balance sheet, as well as read and communicate effectively in the English language.

If you are interested in being considered as a candidate, or know of someone who should be considered, please see the form to submit near the bottom of the nomination info page.

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society | 26-Jan-2021 00:27

What Will 2021 Have in Store for the ICT Industry?

While 2021 will remain a year with lots of uncertainties, at the same time, we can say that the pandemic has not affected the information and communications technology (ICT) industry in any significant way. Yes, there has been a slowdown, for example, in the sale of smartphones. Shortages in both materials and expertise are slowing fiber deployment, and the recovery over 2021 will be slow and uncertain because of the many lockdowns and travel restrictions.

However, in general, the pandemic has propelled the industry to the forefront of policies and strategies aimed at limiting its negative effects. We have been able to work and study from home; for many people this has been far from ideal, but even for them it has been better than nothing. On the other hand, many people and businesses have tasted the positives of working from home, and there is no doubt it will become a permanent feature in many organizations.

The same applies to healthcare services; imagine the social and economic consequences if we did not have the technologies that are available to us nowadays.

The reality of 2021 is that, to a large extent, this situation will continue. It will take a year to have sufficient people vaccinated to return to a more normal life. At the same time, during this period, the above-mentioned services will be further developed and will increasingly become part of the overall traditional way of work, education, healthcare, shopping and so on.

Going forward, we will see more self-serve and no-touch options. This will have a positive effect on the development of smart homes, buildings, and cities. However, we also need to address job losses because of it. It might take a decade or more, but the road to some sort of a universal income is inevitable.

A range of new businesses has been set up that is now solely based on online services and especially around video-based services such as Zoom. These services are set to stay with us as there will be more demand for them, with or without a pandemic. Expect lots of new innovations in these video-based services.

Businesses and government institutions will have to incorporate online activities to build resilience in the economy and society to confront further outbreaks or other disruption, such as climate change.

NBN – critical infrastructure underpinning the country's resilience

The NBN will remain the essential infrastructure needed to underpin the economic and social resilience of the country. The upgrade towards FttH will start in 2021, and the current weak spots will be weeded out of the network. It is unlikely, therefore, that the Government will any time soon consider the privatization of the NBN or any other major change to its NBN policy.

This will mean more pain for the industry, as the Government will rely on NBN Co to fund the upgrades itself. As a result, there will be an ongoing margin squeeze for retail service providers (RSPs). NBN Co has an infrastructure monopoly, and the RSPs have no option but to accept the prices that NBN Co charges to recoup its overpriced NBN infrastructure.

The only bright spot here is that the ACCC has indicated that it will become more involved in the price setting of wholesale services. However, the Minister for Communications seems willing to intervene to ensure that NBN Co does not face too many obstacles. So, a potential clash of interests is to be expected.

5G essential OPEX tool for operators

5G will remain in the spotlight, and the hype around this technology will continue. Nowhere in the world is there yet a clear business model for 5G, other than that it will over time replace 4G because it is more efficient and saves the industry operational costs. The mobile companies need 5G to stay competitive. It will be interesting to see whether more solid business models will be developed for the promising internet of things (IoT) services beyond niche market applications.

What will be interesting is that 5G could lead to a disaggregation of the mobile operators. Telstra has already indicated moves in that direction, with plans to split fixed infrastructure, mobile towers, and services into separate companies. This could, over time, lead to significant changes in the makeup of the industry. More freedom would also be provided for companies that want to deploy 5G for their internal use, especially once it becomes clearer if, when, and in particular what sort of IoT services will be available.

New kid on the block – Wi-Fi 6

An interesting development is also the arrival of Wi-Fi 6. This is a new upgrade to Wi-Fi. The promise is to turn Wi-Fi from a two-lane highway to an eight-lane highway. It is heralded as the biggest upgrade to Wi-Fi in 20 years, and connections should be faster and a lot more reliable because of it.

The first wave of certified Wi-Fi 6 products (based on the 802.11ax standard) is set to enter the market in the coming months. The new technology will be embedded in phones, PCs and laptops. Wi-Fi 6-supported TVs and VR devices are expected to arrive by the middle of the year.

Further predictions include:

  • globally, more mobile mergers are to be expected;
  • overall, the global telecoms industry will continue to decline (by 1–3 percent) — it might recoup some of its 2020 losses during 2021;
  • fixed broadband will always win out over mobile broadband — there are short-term and niche market opportunities in areas where there is (still) only poor fixed broadband available;
  • the IT side of the industry will continue to see significant growth, especially in cloud computing, data analytics, data centers;
  • OTT entertainment such as Netflix, Stan, YouTube and Disney Plus will continue to increase, while traditional pay-TV will continue to decline;
  • telehealth and tele-education will receive more attention from the professionals in these industries and the services will become more sophisticated;
  • regulations will be implemented to address fake news, disinformation and the misuse of social media;
  • on the negative side, we might see one or two major cyber-attacks (such as loss of power/internet to a major city as well as a continuation of a range of smaller attacks); and
  • China will increase its share in the global technology industry and will become the technical leader in many areas.

Written by Paul Budde, Managing Director of Paul Budde Communication | 25-Jan-2021 22:48

The New .AU Domain Licensing Rules and Their Impact

The Australian domain registry, auDA, has now confirmed their new licensing rules will go into effect on April 12, 2021. The registry has been working on this change for quite some time in preparation for the anticipated launch of their top-level domain (TLD), .AU. These rules will apply to new registrations and around three million existing domain names in the COM.AU, NET.AU, ORG.AU, and other .AU namespaces.

As previously indicated, the new rules are not likely to impact the majority of existing registrants. While the commencement date for the new rules is still a while away, there are some things to do now in preparation for the changes.

What's changing?

If you have used an Australian business number (ABN) or company number (ACN) as your eligibility, you will not be impacted as long as the ABN or ACN is valid.

Until now, to register a domain under the .AU top-level domain extension, an Australian trademark application or registration could be used to satisfy the required Australian presence. The domain name did not need to match the trademark.

Under the new rules, a domain name registration based on an Australian trademark must exactly match the trademark. That means the domain name must be identical to and in the same order as the words that appear on the Australian trademark application or registration, excluding domain name system (DNS) identifiers such as COM.AU, punctuation marks, articles such as "a," "the," "and," "of," and "&."

Trademark example: Tweedledee & Frog Prince!
Example of an exact match domain name: tweedledeefrogprince.com.au
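The exact-match rule as summarized above can be sketched as a normalization and comparison. This is our illustration of the rule, not auDA's actual validation logic, and the helper names are ours:

```python
# Sketch of the .AU exact-match rule: strip the DNS identifier (e.g.
# ".com.au"), punctuation, "&", and the listed articles from the trademark,
# then compare what remains to the domain label.
import re

ARTICLES = {"a", "the", "and", "of"}  # articles excluded per the rule

def trademark_to_label(trademark: str) -> str:
    """Reduce a trademark to the domain label its exact match must equal."""
    words = re.findall(r"[a-z0-9]+", trademark.lower())  # drops "&", "!", etc.
    return "".join(w for w in words if w not in ARTICLES)

def is_exact_match(domain: str, trademark: str, suffix: str = ".com.au") -> bool:
    label = domain.lower().removesuffix(suffix)
    return label == trademark_to_label(trademark)

print(is_exact_match("tweedledeefrogprince.com.au", "Tweedledee & Frog Prince!"))
```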

What's the impact?

Non-compliant domain names must have their errors corrected before they come up for renewal any time after the new rules are implemented. Failing to comply could mean that auDA or the managing registrar suspends or cancels the non-compliant domain. Once a domain name is cancelled, it may not be transferred or renewed; it will be purged from the registry records and made available for registration by the general public.

Cleaning up your .AU domain portfolio not only makes you compliant with the new rules, but enables you to be eligible to participate in the launch of the top-level .AU domain that is widely expected to take place in the second half of 2021.

What to look out for

As a rule of thumb for the domains you want to maintain, all domain registrants are obligated to keep their domain information complete, true, and accurate throughout the lifetime of their domain names. Similarly, all .AU domain registrants are continually required to have a valid Australian presence and satisfy any eligibility and allocation criteria for the namespace being applied for.

Registrant contacts and entity information can change over time and may not have been applied to domain portfolios. The licensing rules change presents a great opportunity for domain registrants to ensure their domain information is up-to-date.

CSC will run an audit of all .AU domain names and then work with domain holders to update the ownership of non-compliant .AU domain names to the correct trademark details or local entity details.

The following domain information should be reviewed thoroughly:

  1. Trademark information used, including foreign companies holding a COM.AU or NET.AU domain where a trademark right has been used for eligibility AND the domain name is NOT an exact match of the words subject to the Australian trademark application or registration. Pending trademarks will also be accepted as a basis of eligibility.
  2. The business registration number (ABN or ACN for example) where applicable.
  3. Registrant contact information.
  4. Technical and administrator contact information.
  5. All other WHOIS details and data in registry records.
What are my options if I don't have a local entity or TM?

If a domain does not meet the current eligibility requirements, you will not be able to renew it at the time of expiration. If you do not have a local Australian entity or an Australian trademark matching the domain, you have a few options.

  1. Apply for a new trademark to match the domain.
  2. Register a local business; more information can be found here.
  3. Lapse the domain.

Before deciding that it might be okay to purge a few domains, be aware that lapsed or abandoned domain names carry a footprint of digital activity that can be leveraged as an attack vector or cause disruption to a virtual private network (VPN), voice-over IP (VoIP), website, services, servers, network or email, and a host of other dependencies.

What is CSC doing?

In preparation for the changes, CSC has initiated an audit and will notify our clients about any domains that require an update.

If you have domains with other registrars that you would like assistance with, or would like to register names under the upcoming .AU top-level domain launch, please reach out to your CSC client service partner or contact us | 25-Jan-2021 22:14

Is Starlink the Tesla of Broadband Access? I Have a Chance to Find Out

Starlink is satellite internet access from SpaceX, one of Elon Musk's other companies. If it lives up to its hype, it will cure the problem of broadband availability in rural areas, although affordability will still be an issue.

Most satellite-based Internet access sucks (that's a technical term). If based on geostationary satellites (ones you can point a dish at), the distance to the satellite is so great that the round-trip time for data is forever; this problem is called latency. High latency doesn't matter much if you're uploading or downloading files; it's incredibly annoying if you're web surfing; and pretty much unusable for VoIP and especially for Skyping and Zooming. Technical details at Satellite Broadband Access — OK If You Have To.

Services like Iridium use LEOS (Low Earth Orbit Satellites), so they don't have a latency problem, but, for technical reasons, they have speeds that you thought you left behind when you stopped doing dialup — and they're very expensive to boot. Way better than nothing if you're in the middle of the ocean and need to see a weather forecast or send an SOS but not a reasonable alternative for home or office use.
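The latency gap between geostationary and low-orbit satellites follows directly from their altitudes. A back-of-the-envelope calculation (using nominal altitudes of ~35,786 km for GEO and ~550 km for Starlink's initial shells; real paths involve slant angles and processing delays, so actual figures are higher):

```python
# Best-case propagation delay for a satellite internet round trip: the
# query and the response each traverse ground->satellite->ground, i.e.
# four hops at the orbital altitude, assuming the satellite is overhead.
C_KM_S = 299_792  # speed of light, km/s

def min_round_trip_ms(altitude_km: float) -> float:
    """Minimum query+response propagation delay in milliseconds."""
    return 4 * altitude_km / C_KM_S * 1000

print(f"GEO: {min_round_trip_ms(35_786):.0f} ms")  # roughly half a second
print(f"LEO: {min_round_trip_ms(550):.1f} ms")     # single-digit milliseconds
```

This is why GEO latency is noticeable on every page load while LEO propagation delay is small compared to the rest of the network path.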

Starlink also uses LEOS but has much greater bandwidth than any other low-orbit service, at least partly because SpaceX has used its rockets to launch swarms of tiny satellites. And, according to an email I just got today (an Inauguration Day present?), Starlink is now available in limited supply in my service area (North Central Vermont).

"During beta users can expect to see data speeds vary from 50Mb/s to 150Mb/s and latency from 20ms to 40ms in most locations over the next several months as we enhance the Starlink system. There will also be brief periods of no connectivity at all.

As we launch more satellites, install more ground stations and improve our networking software, data speed, latency and uptime will improve dramatically. For latency, we expect to achieve 16ms to 19ms by summer 2021."

A latency of 40ms is acceptable for almost all uses except very high-speed gaming and stock trading. 16-19ms (milliseconds) is what you'd expect from cable.

The speed is considerably less than the Gbps service advertised by some fiber providers. Most of us don't need anywhere near that speed today. But uses will be found for it (3D conferencing with avatars?) because it exists. It remains to be seen if Starlink can scale to these speeds.

Reliability should be better than cable or even service from a wireless ISP. So long as you have electricity from some source, you're not going to lose your Internet access because of a storm or other local emergency.

So what's the rub? Price, at least for now.

To take part in the Beta, I had to buy a $499 dish and other equipment (actually $581.94 with tax and shipping) and agree to pay $99/month plus taxes for service. However, that's less than the price of most smartphones and not much more than cellphone monthly charges. There is no contract, and there is a 30-day no-fault money-back guarantee on the equipment. I expect there will be higher and lower prices available for different service tiers and that competition will bring the equipment cost down. Richard Branson is also launching tiny satellites, although they have no service based on them yet.

If this all works and service is available nationwide, there should be no reason why any child in rural areas can't go to school online or why any of us can't benefit from telemedicine. Affordability is a problem we can afford to fix — not by subsidizing SpaceX and eventual competitors but with direct aid to low-income households. Many users will have offsetting savings from canceling their old-fashioned phone service and canceling satellite TV since streaming video will rock at these speeds.

Rural economies are already benefiting from urban flight — at least those rural areas that have decent broadband. The cost of this service is minuscule if you're already buying a house in Vermont to work from. If you're an early adopter moving to a rural area, you'll save lots of money overall because houses are cheaper where there is no good broadband today. Welcome to Vermont!

If Starlink pans out (and it's still an if), it will be a greater contribution to the common good than Teslas. I'll let you know how the Beta goes.

You can find out if you can be part of the beta by clicking CHECK AVAILABILITY.

Written by Tom Evslin, Nerd, Author, Inventor | 22-Jan-2021 22:59

The Netizen's Guide To Reboot The Root (Part I)

Rampant dysfunction currently plagues the Internet's root zone where a predatory monopolist has captured ICANN and is bullying stakeholders. This harms the public interest and must be addressed — here's how.

Introduction: How To Save The Internet In 3 Simple Steps

In the world of ICANN and Internet policy, complexity is manufactured to create an illusion that issues are impenetrably technical such that normal and everyday principles can't apply. This causes a pervasive and entrenched phenomenon of eyes that glaze over at the mere mention of the word "ICANN" — including those of government regulators and other officials that might otherwise take more of an active interest. Thus, only rarely does anyone attempt untangling one of the messy issues impacting Domain Name System (DNS) governance that often resemble a Gordian knot. Instead, most take a leap of faith by entrusting matters to ICANN along with its contracted parties and other stakeholders before finding something else to do.

Untangling these Gordian knots — and there are more than one — requires an accurate understanding of what is going on in an interconnected web of intrigue with many moving parts and different players. Today, the Internet's root zone, like a failed state, is dominated by predatory corporate warlords intent on maximizing control of Internet infrastructure that they don't own but which they feel entitled to. Their pursuit of this anticompetitive aim has thoroughly corrupted the entire DNS ecosystem and is normalized as the status quo.

Some argue for replacing ICANN with a new organization created by an amorphous "global community of stakeholders." However, this is dangerously naive at best because the exact same fate would likely befall any successor to ICANN if what causes the dysfunction isn't fixed first. A much darker view recognizes that, since the IANA transition, certain interests with their own agendas have loosely aligned around the common goal of a root zone that is beyond U.S. jurisdiction. This should raise questions, if not outright suspicions, about whether governance dysfunction is being purposely exacerbated towards a full-blown crisis that can be exploited to achieve the aim of these inimical interests.

The possibility that this theory is even partially true adds to the impetus for addressing the corruption, dysfunction, and capture that makes the Internet's root zone resemble a failed state. Doing so requires a reboot of the Internet's root that restores governance back within design parameters by properly implementing privatization while also resetting certain areas where non-standard deviations are causing harmful downstream effects. At a minimum, such a reboot must:

  1. Undo Amendment 35 of the NTIA-Verisign Cooperative Agreement;
  2. Delete Presumptive Renewal From Legacy Registry Agreements;
  3. Open the Internet's Largest Registry to Market Competition.

Necessarily, this is an ambitious agenda — so let's pop the hood and take a closer look at how to save the Internet in three simple steps.

Ctrl-Z: Undo Amendment 35 of the NTIA-Verisign Cooperative Agreement

The cooperative agreement — which was first signed by the U.S. government and Network Solutions, Verisign's predecessor-in-interest, in 1993 — is a foundational document of Internet governance and the legal instrument by which the U.S. government delegates key management functions of the Internet's DNS to the private sector. It has been amended thirty-five times, most recently in October 2018, when the U.S. Commerce Department exercised a unilateral renewal option while also approving transformative modifications to the agreement that singularly benefit Verisign at the expense of the public interest.

First and foremost, Amendment 35 removed an essential pricing safeguard that was implemented in 2012 and capped the maximum price that Verisign could charge for .com domain names at $7.85. This price restriction — which protected .com domain name registrants from arbitrary and excessive price increases by Verisign — resulted from the findings and recommendations of a 2012 empirical review of .com's market power conducted by the U.S. Justice Department.

Oddly, the Commerce Department's decision to remove the pricing safeguard wasn't based on updated empirical findings from the Justice Department, which hasn't conducted any formal competition review of .com since 2012. The absence of any updated review is conspicuous, particularly because it ignores at least one Congressional request for an updated review, dating from 2016, and also contradicts the Justice Department's own expectations regarding the cooperative agreement and assurances that were provided to Congress, also in 2016.

The consequences of the Commerce Department's decision to remove this essential consumer protection became clear in short order when ICANN disregarded an unprecedented outpouring of more than 9,000 public comments unanimously opposing any increase in .com pricing and, instead, sold pricing power to Verisign for $20 million.

The removal of such an essential consumer safeguard without any empirical basis for doing so harms the public interest. But Amendment 35 makes other changes to the cooperative agreement that fundamentally transform the nature of the relationship between the U.S. government and Verisign. The most problematic of these is that Amendment 35 makes any future amendment of the cooperative agreement — including any potential regulatory action — subject to mutual consent of both parties. This means that reinstating the pricing safeguard would require Verisign's consent, and the likelihood of this happening is between slim and none — and slim just left town.

The troubling terms of Amendment 35, however, aren't nearly as disturbing as the process by which it was allegedly approved. According to sources, former Secretary of Commerce Wilbur Ross sidelined the responsible agency, the National Telecommunications and Information Administration (NTIA), and personally directed the cooperative agreement renewal and Amendment 35.

On September 19, 2018, Secretary Ross met privately in his office with then-NTIA Administrator David Redl. During this brief meeting, Secretary Ross is alleged to have handed Redl a document containing the text of what later became Amendment 35 and instructed him to amend the cooperative agreement with the provided text without any further modification or review. Sources have further alleged that Secretary Ross received the document that was provided to Redl while attending a dinner function the prior evening at which other senior government officials, including the Secretary of State, were also present.

Although sources did not identify the person or persons that allegedly gave the document to Secretary Ross, publicly-available official records from the Commerce Department and NTIA confirm that the private meeting with Redl took place as described as well as the dinner function the night before. These allegations, along with the corroborating official records, suggest that improper political interference occurred that involved the highest levels of the Commerce Department, including the Secretary. If true, this would explain how Amendment 35 — truly a terribly awful deal — came to be approved. At a minimum, much greater scrutiny is needed that can shed light on the circumstances surrounding Amendment 35.

Notwithstanding the veracity of these allegations, the government should ask a federal judge to nullify Amendment 35 to the cooperative agreement with Verisign, pursuant to the Administrative Procedure Act. The terms of the amendment are anathema to the public interest and hamstring the government's ability to effectively protect consumers and the broader public interest. Also, Congress wasn't consulted, and the timing of the amendment — which was signed a month early at the end of October while Congress was in recess for the 2018 mid-term election — raises concerns about whether Congress was deliberately kept in the dark in order to avoid oversight.

The result is a deal that lopsidedly benefits Verisign at the expense of, literally, everyone else. Accordingly, the new Administration should prioritize the interests of DNS stakeholders, .com registrants, and fans of good government by hitting Ctrl-Z to undo Amendment 35 so that it can be replaced with an amendment that adheres to proper procedures, relies on a full and updated empirical review of the market power of .com, and, most importantly, benefits the public interest.

Stay tuned for Part 2 — Delete Presumptive Renewal From Legacy Registry Agreements — Coming Soon!

Written by Greg Thomas, Founder of The Viking Group LLC | 22-Jan-2021 22:36

Looking Back at the Broadband Industry in 2020

I periodically look ahead at broadband trends. But as I was thinking about how unusual 2020 was for everybody, I realized that some events during the year are ones we're going to look back on a decade from now as important to the broadband industry. Interestingly, most of these events were not on anybody's radar at the beginning of the year.

Upload Broadband Entered the Picture

For the first time, we all started caring about upload speeds due to the pandemic. Millions of homes that thought they had good broadband suddenly found that the home broadband connection wasn't good enough for working or schooling. Millions of people reacted to this by upgrading to faster download broadband speeds, only to find in many cases that the upgrade still didn't fix the upload speed problems.

It also appears that a lot of people will continue to work from home after the pandemic ends, which means the demand for upload speeds is not going to go away. This will put a lot of pressure on cable companies in markets where there is a fiber competitor. Fiber ISPs need only advertise themselves as the work-from-home solution to snatch customers.

Charter Pursues Rural Broadband

Charter looks to be the only ISP out of the largest four adopting a strategy to expand to rural areas surrounding existing markets. Charter has been the fastest-growing ISP over the last few years, and it looks like the company wants to continue that growth.

I think the rural telcos will look back in a decade and realize they made a big mistake. The telcos have had repeated opportunities to upgrade broadband and dominate the rural markets, where they could have been a permanent monopoly. Instead, Charter is going to sweep through many markets and take most of the customers. Charter will be aided in this expansion by the $1.22 billion they snagged out of the recent RDOF grant.

Windstream Decides to Chase Fiber

If you go by what they're saying, Windstream is coming out of bankruptcy as a new company. The company has said recently that it intends to build fiber to cover at least half of its historic telephone serving areas. This will catch Windstream up to the smaller telcos that have largely migrated to fiber as the only chance for long-term survival. Of course, this also means that half of Windstream's markets are largely going to be abandoned. Windstream customers have to be wondering which half they live in.

Satellite Broadband Goes into Beta

After years of being somewhat theoretical, Starlink has beta customers who are enjoying download speeds between 50 Mbps and 150 Mbps. All of the satellite companies still have a long way to go in launching enough satellites to become viable competitors — but we now have proof of concept.

Rough Year for the Supply Chain

Like so many others, the telecom industry has mostly taken the supply chain for granted without much thought about where network components are manufactured. 2020 started with price pressure on electronics due to tariffs and went into a tailspin when the pandemic hit Wuhan, China, where the majority of laser technology is made.

Electronics vendors have spent much of 2020 developing new sources of manufacturing. This means a downside for the Chinese economy but an upside for many other places in the world. The new administration says it will fund an effort to move much of chip manufacturing back to the US, and hopefully other electronic components will follow. The big advantage that the Far East has had over US manufacturing has been cheap labor, but modern and largely robotized factories might overcome that. Hopefully, telecom vendors will take the needed steps to make sure we aren't caught flat-footed again.

Written by Doug Dawson, President at CCG Consulting | 22-Jan-2021 22:28

The Legacy of the Pai FCC

As is normal with a change of administration, there are articles in the press discussing the likely legacy of the outgoing administration. Leading the pack in singing his own praises is former FCC Chairman Ajit Pai, who recently published this document cataloging the accomplishments of the FCC under his chairmanship. Maybe it's just me, but it feels unseemly for a public servant to publish an official self-praise document. The list of accomplishments is so long that I honestly read it twice to make sure Chairman Pai wasn't taking credit for inventing 5G!

More disturbing to me are industry articles like this one that lists the primary achievements of the Pai FCC to include "the repeal of Title II regulation of the internet, rural broadband development, increased spectrum for 5G, decreasing waste in universal service funding, and better controlling robocalls." I see some of those as failures and not accomplishments.

I find it unconscionable that the regulatory agency that is in charge of arguably the most important industry in the country would deregulate that industry. The ISP industry is largely controlled by a handful of near-monopolies. It's easy to understand why the big ISPs don't want to be regulated — every monopoly in every industry would love to escape regulation. It's the government's and the FCC's role to protect the public against the worst abuses of monopolies. Lack of regulation means that carriers in the industry can no longer ask the FCC to settle disputes. It means that consumers have no place to seek redress from monopoly abuses. We're within sight of $100 basic broadband, while the FCC has washed its hands of any oversight of the industry. Killing Title II regulation comes pretty close in my mind to fiddling while Rome burns.

We saw the results of broadband deregulation at the start of the pandemic. Had the FCC not deregulated broadband, then chairman Pai could have directed ISPs on how they must treat the public during the pandemic. Instead, the FCC had to beg ISPs to voluntarily sign on to the 'Keep America Connected Pledge', which only lasted for a few months and which some of the big ISPs seemingly violated before the ink dried. During this broadband crisis, the FCC stood by powerless due to its own decision to deregulate broadband. This is downright shameful and not praiseworthy.

Everywhere I look, this FCC is getting praise for tackling the digital divide, and admittedly the FCC did some good things. There were some good winners of the CAF II reverse auction that will help rural households — but that was offset by awarding some of that grant to Viasat. The FCC did some good by increasing the Lifeline subsidy for tribal areas. But on the downside, the FCC decided to award a seventh year of CAF II subsidy of $2.4 billion to the big telcos — with zero obligations to use the money to expand broadband. The FCC knows full well that the original CAF II was mostly a sham and yet took no action in the last four years to investigate the failed program. The Pai FCC closed out its term by largely botching the RDOF grants.

The area where the FCC did the most good for rural broadband was making more wireless spectrum available for rural broadband. This FCC missed a few chances early, but in the last few years, the FCC nailed the issue. The FCC might have made the best long-term impact everywhere with the rulings on 6 GHz spectrum. Spectrum decisions might be the biggest lasting legacy of this FCC.

But we're never really going to know how this FCC did in narrowing the rural broadband gap because this FCC has no idea how many homes don't have broadband. The lousy FCC mapping was already a big issue when Chairman Pai took over the FCC. There was a lot of gnashing of teeth about the issue under Chairman Pai, but in four years, nothing was done to fix the problem, and if anything, the maps have gotten worse. It might not be so disturbing if the bad mapping was nothing more than lousy data — but the bad data has been used to justify bad policy and, even worse, has been used to determine where federal grants should be awarded.

To add salt to the wound, the FCC issues a mandated report to Congress every year that reports on the state of broadband. The reports from the Pai FCC are so full of imaginary numbers that they are closer to fiction than fact. About the most the FCC under Chairman Pai can say is that the imaginary number of people without broadband grew smaller under his watch. On the last day as Chairman, the FCC released the latest report to Congress that concludes incorrectly that broadband is being deployed to Americans "on a reasonable and timely basis." This recent report also concludes yet again that 25/3 Mbps is still a reasonable definition of broadband — when homes with that speed were unable to function during the pandemic.

In looking back, it's clear that this FCC tilted as far as possible in favor of the big ISPs. There is nothing wrong with regulators who work to strengthen the industry they regulate. But regulators also have a mandate to protect the public from monopoly abuses. The FCC seems to have forgotten that half of its mandate. If there is any one event that captures the essence of this FCC, it was when it voted to allow Frontier to bill customers for an extra year for equipment that the customers own. I didn't see that accomplishment on Chairman Pai's list.

Written by Doug Dawson, President at CCG Consulting | 22-Jan-2021 22:10

Why the Internet is Not Like a Railroad

When one person transmits the speech of another, we have had three legal models, which I would characterize as Magazine, Bookstore, and Railroad.

The Magazine model makes the transmitting party a publisher who is entirely responsible for whatever the material says. The publisher selects and reviews all the material it publishes. If users contribute content such as letters to the editor, the publisher reviews them and decides which to publish. The publishing process usually involves some kind of broadcast, so that many copies of the material go to different people.

In general, if the material is defamatory, the publisher is responsible even if someone else wrote it. In New York Times Co. v. Sullivan, the court made a significant exception that defamation of public officials requires a plaintiff to show "actual malice" by the speaker, which makes successful defamation suits by public figures very rare.

The Bookstore model makes the transmitting party partly responsible for material. In the 1959 case Smith v. California, Smith was a Los Angeles bookstore owner convicted of selling an obscene book. The court found that the law was unconstitutional because it did not require that the owner had "scienter," knowledge of the book's contents, and that bookstore owners are not expected to know what is in every book they sell. (It waved away an objection that one could evade the law by claiming not to know what was in any book one sold, saying that it is not hard to tell whether one would be aware or not.) In practice, this has worked well, allowing bookstores and newsstands to sell material while still having to deal with defamatory or illegal material if told about it. Bookstores and newsstands engage in limited distribution, selling a variety of material but typically selling one copy of a book or magazine at a time to a customer.

The third is the Railroad model, or common carriage. Originally this applied to transport of people or goods, in which the carrier agrees to transport any person or any goods, providing the same service under the same terms to everyone. In the US, telephone companies are also common carriers, providing the same communication service to everyone under the same terms. As part of the deal, the carriers are generally not responsible for the contents of the packages or messages. If I send a box of illegal drugs or make an illegal, threatening phone call, I am responsible, but UPS or the phone company is not.

Common carriage has always been point to point or at most among a set of known points. A railroad takes a passenger or a box of goods from one point to another. A telephone company connects a call from one person to another, or at most to a set of other places determined in advance (a multipoint channel). This is nothing like the publisher, which broadcasts a message to a potentially large set of people who usually do not know each other.

How does this apply to the Internet? Back in 1991, in Cubby v. CompuServe, a case where a person was defamed by material hosted on CompuServe, a federal court applied the bookstore standard, citing the Smith case as a model. Unfortunately, shortly thereafter, in Stratton Oakmont v. Prodigy, a New York state court misread Cubby and decided that an online service must be either a publisher or a common carrier, and since Prodigy moderated its forum posts, it was a publisher.

In response, Congress passed Section 230, which in effect provided the railroad level of immunity without otherwise making providers act like common carriers.

There are not many situations where one party broadcasts other people's material without going through a publisher's editorial process. The only one I can think of is public access cable channels, which unsurprisingly have a contentious history, mostly of people using them to broadcast bad pornography. The case law is thin, but the most relevant case is Manhattan Community Access Corp. v. Halleck, where the Supreme Court ruled 5-4 that even though a New York City public access channel was franchised by the state, it was run by a private entity so the First Amendment didn't apply. These channels are not a great analogy to social networks because they have a limited scope of one city or cable system, and users need to sign up, so it is always clear who is responsible for the content.

Hence Section 230 creates a legal chimera, splicing common-carrier liability treatment onto a broad range of providers who are otherwise nothing like common carriers. This is a very peculiar situation and perhaps one reason why Section 230 is so widely misunderstood.

Does this mean that the current situation is the best possible outcome? To put it mildly, a lot of people don't think so. Even disregarding those who have no idea what Section 230 actually does (e.g., imagining that without 230, their Twitter posts would never be deleted), there are some reasonable options.

The magazine model, treating every platform as a publisher, won't work for reasons that I hope are obvious: the amount of user-contributed material, even on small sites, is far more than any group of humans could possibly review. (On my own server, I host a bunch of web sites for friends and relatives, and even that would be impossibly risky if I were potentially liable for any dumb thing they or their commenters might say.)

The bookstore model, on the other hand, worked when the Cubby case applied it to Compuserve, and it could work now. Sites are immune for the material they haven't looked at or been told about, but they have to do something when notified. Getting the details right is important. The copyright notice and takedown rules of the DMCA sort of work but are widely abused by people sending bogus complaints in the (often correct) hope that sites will take the material down without reviewing the complaint or allowing the party that posted the material to respond. There has to be a reasonable balance between what is a notice and what is a reasonable response, but that doesn't seem impossible to figure out.

Written by John Levine, Author, Consultant & Speaker | 22-Jan-2021 20:41

Blind Eagle Targeted Attack: Using Threat Intelligence Tools for IoC Analysis and Expansion

Blind Eagle is a South American threat actor group, believed to be behind APT-C-36, that has been active since at least 2018. It primarily targets Colombian government institutions and large corporations in the financial, petroleum, and professional manufacturing industries.

Over time, researchers from QiAnXin Threat Intelligence Center have accumulated a list of the threat's indicators of compromise (IoCs), spoofed and affected organizations, and malicious attachment and malware MD5 hashes that would serve potential targets well. This list includes:

  • 13 spoofed companies and government institutions
  • Nine affected organizations
  • 28 malicious document MD5 hashes
  • 62 Trojan MD5 hashes
  • Six malicious domains
  • Eight malicious URLs
  • Nine RAR archive passwords

This expanded analysis will, however, focus only on the malicious domain IoCs. We used two threat intelligence tools to surface as-yet-unpublished artifacts that may be of interest.

Expanding the IoC List with Threat Intelligence Tools

2 Additional IP Address Artifacts

We began by subjecting the six domains from the original IoC list to a bulk WHOIS lookup to see if we could obtain registrant email addresses, names, or organizations. But we did not get any such information, as all of the domains' WHOIS records were redacted for privacy. The Blind Eagle threat actors did their due diligence not to reveal any personally identifiable information (PII) that way.

We then ran the six domains from the original list through DNS lookups. DNS Lookup API gave us the following related IP addresses:

  • diangovcomuiscia[.]com: 154[.]88[.]101[.]205
  • linkpc[.]net: 67[.]214[.]175[.]69
  • publicvm[.]com: 67[.]214[.]175[.]69

Interestingly, two of the domains resolved to the same IP address — 67[.]214[.]175[.]69, which was dubbed "malicious" by six engines on VirusTotal for being a malware host.
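Spotting shared infrastructure like this is easy to automate. Below is a minimal Python sketch that groups the resolutions reported above by IP address; the data is hardcoded (and kept defanged) rather than resolved live, since these are active malicious domains:

```python
from collections import defaultdict

# Domain-to-IP resolutions reported above (defanged on purpose).
resolutions = {
    "diangovcomuiscia[.]com": "154[.]88[.]101[.]205",
    "linkpc[.]net": "67[.]214[.]175[.]69",
    "publicvm[.]com": "67[.]214[.]175[.]69",
}

# Invert the mapping: IP -> list of domains hosted there.
by_ip = defaultdict(list)
for domain, ip in resolutions.items():
    by_ip[ip].append(domain)

# Any IP hosting more than one IoC domain suggests shared infrastructure.
shared = {ip: doms for ip, doms in by_ip.items() if len(doms) > 1}
print(shared)  # {'67[.]214[.]175[.]69': ['linkpc[.]net', 'publicvm[.]com']}
```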

The IP address was also tagged "malicious" on AbuseIPDB after being reported 442 times for reasons that include:

  • Secure Shell (SSH) brute-forcing
  • Port scanning
  • Relations to a web app attack
  • Bot activity
  • Web/Email spamming
  • Distributed denial-of-service (DDoS) attack
  • Hacking
  • File Transfer Protocol (FTP) brute-forcing
  • Phishing
  • Voice over Internet Protocol (VoIP) fraud
  • Open proxy hacking
  • Using a Virtual Private Network (VPN)-protected IP address
  • SQL injection
  • IP spoofing
  • Host compromise
  • Internet of Things (IoT) device hacking

8 Additional Domain Artifacts

After obtaining the IP addresses the domains in the original list resolved to, we subjected them to reverse IP/DNS searches. Reverse IP/DNS Lookup gave us the following list of connected domains:

  • 154[.]88[.]101[.]205 (four connected domains): diangovcomuiscia[.]com, eapoch[.]com, go-aheadwebshop[.]com, www[.]go-aheadwebshop[.]com
  • 67[.]214[.]175[.]69 (four connected domains): box6[.]dnsexit[.]com, linkpc[.]net, publicvm[.]com, thinkvm[.]com

Out of the eight domains we got from the reverse IP/DNS searches, it's worth noting that:

The domains diangovcomuiscia[.]com and publicvm[.]com, which each resolved to one of the IP addresses, are also part of QiAnXin Threat Intelligence Center's IoC list.

The domains eapoch[.]com, go-aheadwebshop[.]com, dnsexit[.]com, linkpc[.]net, and thinkvm[.]com, meanwhile, are additional threat artifacts. Of these five domain names, linkpc[.]net is dubbed "malicious" on VirusTotal. A search on Screenshot Lookup told us that the domain is not even in use by a website. It is, if the resulting page is to be believed, available for any interested party's use free of charge. That may just be a clever ruse to trick people into clicking a likely malicious link embedded on the webpage, though this would require further investigation. Given that, it may be considered safe to block access to it from an organization's network.

And even if the other four additional domains are benign, it may also be considered best to block network access to them as well, since they share IP addresses with confirmed APT-C-36 IoCs.
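Once the reverse IP/DNS results are in hand, surfacing candidate new artifacts reduces to a set difference. A rough Python sketch using the defanged names above; only the two original IoC domains confirmed in this analysis are listed as known, so a real pipeline would load the full published IoC list instead:

```python
# Original IoC domains confirmed above; the full QiAnXin list has six.
known_iocs = {"diangovcomuiscia[.]com", "publicvm[.]com"}

# Domains returned by the reverse IP/DNS searches.
connected = {
    "diangovcomuiscia[.]com", "eapoch[.]com", "go-aheadwebshop[.]com",
    "www[.]go-aheadwebshop[.]com", "box6[.]dnsexit[.]com",
    "linkpc[.]net", "publicvm[.]com", "thinkvm[.]com",
}

# Anything connected but not already known is a candidate new artifact
# worth vetting before adding to a blocklist.
new_artifacts = sorted(connected - known_iocs)
print(new_artifacts)  # six candidates, including linkpc[.]net
```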

To stay truly protected from Blind Eagle and APT-C-36, it is advisable to subject publicized IoCs to further research and analysis using domain and IP intelligence tools. As this short study showed, at least one more IP address (i.e., 67[.]214[.]175[.]69) and one additional domain (i.e., linkpc[.]net) should probably be included in company blacklists. | 22-Jan-2021 20:07

Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I'll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I'll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family's protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign's long-term research program and do not necessarily represent Verisign's plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community's root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I'm not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback.

It's also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle's hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn't hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle's original algorithm. Two approaches are described in RFC 8391, "XMSS: eXtended Merkle Signature Scheme" and RFC 8554, "Leighton-Micali Hash-Based Signatures." Another approach, SPHINCS+, is an alternate in NIST's post-quantum project.
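To make the idea concrete, here is a minimal sketch of a Lamport one-time signature, the simplest building block behind Merkle's construction. This is illustrative Python only, not production code or any standardized scheme; the key sizes and message handling are simplifications. Note how security rests entirely on the hash function (here SHA-256):

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

N = 256  # number of message-digest bits signed

def keygen():
    # Private key: two random 32-byte preimages per digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]
    # Public key: the hashes of those preimages.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]

def sign(sk, msg):
    # Reveal one preimage per bit of the message digest.
    # (One-time only: reusing a key leaks additional preimages.)
    return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

def verify(pk, msg, sig):
    # Hashing each revealed preimage must reproduce the public key.
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(digest_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"example DNS record set")
assert verify(pk, b"example DNS record set", sig)
assert not verify(pk, b"tampered record set", sig)
```

Breaking this scheme requires inverting the hash function, which is exactly why replacing a weakened hash function — rather than the whole signature design — is enough to repair it.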

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby "chained" to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven't seen significant adoption in security protocols over the past four decades.

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC while still basing security only on hash functions, in the interest of stable post-quantum protections.

One of the ideas we've come up with uses another of Merkle's foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the "leaves" of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree's "root." (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the "path" from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
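The mechanics can be sketched in a few lines of illustrative Python (assuming, for simplicity, that the number of leaves is a power of two; function names are my own, not from any specification):

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Each level pairs up the nodes below it; levels[-1][0] is the root.
    assert (len(leaves) & (len(leaves) - 1)) == 0, "power-of-two leaf count assumed"
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, index):
    # The sibling hash at each level, from leaf to root.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    # Retrace the path: hash upward, ordering siblings by position.
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [b"record set %d" % i for i in range(8)]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, leaves[5], 5, auth_path(levels, 5))
```

The path contains one hash per level, so a depth-20 tree yields a path of 20 × 256 = 5,120 bits with SHA-256 — the figures quoted above — while authenticating up to 2^20 leaves.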

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record's signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set's signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver's side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.
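The resolver-side check can be sketched as recomputing the synthesized ZSK from the record set and its authentication path. The following is an illustrative Python toy, with a hand-built four-leaf tree standing in for a zone and entirely hypothetical record contents:

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def retrace(leaf, index, path):
    """Recompute the Merkle root from a leaf and its authentication path."""
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node

# A tiny four-record zone, hashed into a depth-2 tree by hand.
records = [b"a.example. A 192.0.2.1", b"b.example. A 192.0.2.2",
           b"c.example. A 192.0.2.3", b"d.example. A 192.0.2.4"]
leaf = [H(r) for r in records]
n01, n23 = H(leaf[0] + leaf[1]), H(leaf[2] + leaf[3])
zsk_public_key = H(n01 + n23)  # the synthesized ZSK, published in DNSKEY

# The "signature" on records[2] is its path: sibling leaf, then sibling branch.
signature = [leaf[3], n01]
assert retrace(records[2], 2, signature) == zsk_public_key
```

The resolver never needs a conventional signature-verification primitive here; a hash function is the only cryptographic operation involved.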

The ZSK public key produced by the Merkle tree approach would be a "synthesized" public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer's perspective, because the public key wouldn't have a corresponding private key, yet the DNS records would still, in effect, be "signed by the ZSK!"

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be "amortized" across all records in the zone.

In addition, each new ZSK would need to be signed "on demand," rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the tree will be available sooner; with more changes, the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.
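A back-of-envelope calculation illustrates the tradeoff. The 50,000-bit KSK-level signature size below is an assumed placeholder for illustration, not a figure for any particular hash-based scheme:

```python
import math

KSK_SIG_BITS = 50_000  # assumed hash-based signature size at the KSK level
HASH_BITS = 256        # hash size used in the Merkle tree

def per_record_overhead(num_records):
    # Authentication path grows with tree depth...
    depth = max(1, math.ceil(math.log2(num_records)))
    path_bits = depth * HASH_BITS
    # ...while the KSK-level signature is amortized across all records.
    amortized_ksk_bits = KSK_SIG_BITS / num_records
    return path_bits + amortized_ksk_bits

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} records: {per_record_overhead(n):,.1f} bits per record")
```

With a million records, the amortized KSK-level cost becomes negligible (a small fraction of a bit per record), and the depth-20 authentication path dominates.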


My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I'll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the previous posts in this six-part blog series:

  1. The Domain Name System: A Cryptographer's Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

Written by Dr. Burt Kaliski Jr., Senior VP and Chief Technology Officer at Verisign | 22-Jan-2021 19:47
