
News

i2Coalition and DNA Merger Creates North America's Largest Internet Infrastructure Advocacy Group

The Internet Infrastructure Coalition (i2Coalition), the leading voice for web hosting companies, data centers, domain registrars and registries, cloud infrastructure providers, managed services providers and related tech companies, and The Domain Name Association (DNA), a nonprofit global business association that represents the interests of the domain name industry, recently announced their intended merger. The combined association, which will operate under the name i2Coalition and maintain the i2Coalition's existing organizational and management structure, will be the largest Internet infrastructure advocacy group in North America.

Effective July 28, 2020, this strategic merger ensures the DNA's mission will be supported by i2Coalition's working groups and associated initiatives and will amplify both parties' voices to create the most complete representation of the Internet industry in North America. Included as part of this merger are plans to establish a DNA-branded working group. This ensures that the DNA's mission, to protect and empower businesses and individuals with education and engagement that underscores the importance, benefits and opportunities of domain names, will be strengthened.

Not only does this merger amplify members' reach and capabilities, but it also delivers access to increased combined resources and economies of scale. In turn, this allows the new i2Coalition to generate more impactful and far-reaching campaigns. These campaigns will ensure policy doesn't impede growth, knowledge and access to the Internet and its resources.

"The merger of our organizations underpins the mission of both the DNA and the i2Coalition, combining our mutual dedication to Internet industry best practices and policies to empower continued growth. Combined, we represent over 100 organization members and their online business interests," says Christian Dawson, Co-Founder of the i2Coalition. "Our commitment to the DNA's mission is at the core of this merger, and the priorities of both organizations remain as strong as ever. We look forward to going forth with the expanded capabilities and amplified voice that this newly formed collaborative provides."

The DNA is the first industry trade association representing the interests of the domain name industry. The group is vital for its work, helping consumers, businesses, public-benefit organizations and others understand and take advantage of the Internet name space. At the same time, i2Coalition — since its formal launch in 2012 — has worked with Internet infrastructure providers to advocate for sensible policies, design and reinforce best practices, create industry standards and build awareness of how the Internet works through an array of working groups.

"The mission of the DNA has always been to spread awareness, promote growth, offer resources and facilitate communication about innovation and value in the Internet domain name space," adds Statton Hammock, founding DNA Board member and current Board Secretary. "Our mission aligns well with that of the i2Coalition, and I look forward to remaining part of the new organization and to creating an even larger impact."

"Domain names are a key part of the growth of the Internet infrastructure, and the i2Coalition is excited to become an enabler for the great work the DNA is accomplishing in this sphere," comments Melinda Clem, Chairwoman for the i2Coalition. "We're excited to collaboratively foster a healthy domain environment with universal acceptance of non-traditional domains and provide access to expertise and resources that help address issues facing the domain name industry."

To learn more about the Domain Name Association, please visit www.thedna.org.

To learn more about the i2Coalition, please visit www.i2coalition.com.


circleid.com | 06-Aug-2020 00:34

Is the Internet Sustaining the Growth Trajectories Observed as the COVID-19 Pandemic Hit the World?

With the COVID-19 pandemic in its fifth month of global disruption, many companies have readily shared data, statistics and observational insights on how the pandemic has impacted the global data infrastructure. At DE-CIX, we quickly observed core Internet infrastructure demand increasing and reported this data in April of 2020. Microsoft CEO Satya Nadella remarked to DatacenterDynamics in April of 2020: "we have seen two years' worth of digital transformation in two months."

In May of 2020, Dropbox was quoted in Data Center Knowledge saying: "Another challenge for Dropbox has been the shift of Internet traffic from being highly concentrated in big hubs to a more distributed pattern. Instead of having a lot of traffic coming from a thousand accounts in a university, for example, Dropbox is now seeing all those accounts access its platform from many different places, through many different networks." To address this, Dzmitry Markovich, Senior Director of Engineering at Dropbox, and his team have been analyzing the company's last-mile connectivity strategy and actively looking for more last-mile ISPs to peer with. Dropbox already peers "heavily," but it's now investing in even more peering relationships.

As the COVID-19 pandemic continues to affect many industries, including the restaurant, airline and hospitality sectors, to name a few, the Internet continues to be a beacon of hope, maintaining human interactions and education, all while serving as a business continuity solution. If it weren't for today's Internet, the emotional and physical toll on humans would have been even more devastating.

At the onset of the pandemic, the Internet was reinforced as a formidable and reliable connectivity enabler as at-home workers video conferenced daily, families streamed movies and played games online, students engaged with e-learning tools and more.

Five months in, what trends continue to stick? What demand continues to rise? Has the surging growth of the Internet leveled off?

DE-CIX, the operator of the world's largest carrier- and data center-neutral Internet Exchange, is observing continued growth and demand, with increased connectivity requests to its globally operated Internet Exchanges. The immediate surge in upgrade requests has subsided somewhat, indicating both a leveling off and the quick response of core Internet players, which immediately added capacity in anticipation of user demand. In April 2020, DE-CIX went on record highlighting the need for more bandwidth throughout the world, with core networks and edge locations gobbling data feverishly as they helped communities stay connected, educated, productive, informed and working. This remains the case, and in some 'edge' locations, even more so than before. As the surge in Internet use continues, the digital divide is more apparent than ever.

Has Internet Growth Continued to Accelerate? Decline? Or Level Off?

Prior to COVID-19, annual Internet traffic growth of 10-50% was standard across the DE-CIX platform, as reported in the company's 2019 Annual Report. At the start of this pandemic, DE-CIX observed this level of traffic growth (increases of 10-50%) in a matter of days.
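
To put that compression into perspective, here is a back-of-the-envelope comparison. The 30% figure and the ten-day window below are my own illustrative assumptions drawn from the 10-50% range above, not DE-CIX measurements; the point is simply how different the daily pace of the two scenarios is.

    # Illustrative arithmetic only; the 30% increase and the 10-day surge window
    # are assumptions for the sake of the comparison, not DE-CIX data.
    growth = 0.30                                  # assumed total traffic increase
    normal_daily = (1 + growth) ** (1 / 365) - 1   # spread over a year: ~0.07% per day
    surge_daily = (1 + growth) ** (1 / 10) - 1     # compressed into 10 days: ~2.7% per day

    print(f"normal pace: {normal_daily:.3%} per day")
    print(f"surge pace:  {surge_daily:.2%} per day "
          f"(~{surge_daily / normal_daily:.0f}x the usual rate)")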

It appears that the incremental capacity immediately added at the onset of the pandemic remains the foundation supporting and enabling ongoing growth by network providers. Companies that seemingly had excess capacity are now revisiting their projections, adding incremental capacity to support ongoing usage demand and to ensure a reliable, always-on experience for end-users.

Today, remaining prepared for change is at the core of staying ahead of — or out of the path of — disruption as much as possible. While the future may still be characterized by much uncertainty, recording and analyzing this data on an ongoing basis not only shows the world how much has been accomplished, but also offers reassurance that the Internet remains scalable and highly capable, flexing and adjusting to meet the needs of a changed world. Today, we are seeing that the actions taken at the beginning of this disruption are continuing to ensure that businesses and individuals across the globe are empowered by the digital means — and the teams behind them — that are now so central to life as we know it.

Written by Ivo Ivanov, CEO of DE-CIX International


circleid.com | 05-Aug-2020 18:58

Why the Pandemic Makes Domain Names More Valuable Than Ever

In the United States, at least 25,000 brick-and-mortar businesses will close in 2020 due to the Coronavirus (source: Coresight). I believe this will only be the tip of the iceberg. The businesses that fight to stay alive will become 100% dependent on the Internet to generate their revenue. No longer able to rely on foot traffic to their old brick-and-mortar locations, the popularity and brand-ability of their websites will solely dictate their ability to survive in the coming years.

Domain Names Take Center Stage

Since the beginning of the Internet, a domain name has been the online address of a business — nothing more, nothing less. Before the Coronavirus, most brick-and-mortar businesses kept a wary eye on their SEO, but search engines were only one slice of their revenue-generation pie. Now, they will be totally at the mercy of Google and, to a lesser extent, Yahoo and Bing.

The Difference Between Success And Failure

An instantly memorable domain name will be the only protection these businesses have against over-reliance on search engine rankings. If your business has to live or die strictly by your search engine rankings, you will die. If a client has to search for your business every time to remember your brand, your company will become 100% search engine dependent. In other words, your domain name, your brand, will feed your competition.

The More Things Change, The More They Stay The Same

It's Marketing 101, but, in the Age of the Internet, many have forgotten that it's not the first time a customer visits your site that's important, but how many times they come back. Of course, your product or service has to be stellar, but they also have to remember your online brand name, your domain name, the instant they read it or hear it without having to search for it. Your domain name will become your most important asset.

More Than Ever, The Name Matters

Buy the best, most memorable and unforgettable domain name you can possibly afford. And, please, make it dotCOM. Don't make your clients have to remember both sides of the dot. It's marketing suicide because it will make your brand twice as hard to remember.

Written by David Castello, Co-Founder at CastelloBrothers.com


circleid.com | 04-Aug-2020 21:53

The Internet Is for All

Detrimental Effect of IETF Mandates

Over the past fifty years, participants in what began as the DARPA internet community have been turning out diverse technical specifications for TCP/IP network architectures and services. The first twenty years under government agency sponsorship were marked by the rather free-wheeling sharing of ideas and collegial accommodation of divergent views typically found in most professional, academic activities. The work was eventually institutionalized in the form of what are now two venerable legacy bodies: the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB) that oversees it.

However, during the past twenty years, the encouragement of divergent views and the culture of accommodation began to disappear through the adoption of mandates that often expressed divisive socio-political views. A kind of self-similar set of perspectives became embedded through pronouncements that began to dictate what work would ensue, what specifications would be developed, and what would be shunned. Those involved believed they had a right to decide and dictate, through technical specifications, the capabilities available in the global internet marketplace. Anyone who disagreed was encouraged to go elsewhere, and controls were attempted over work in other venues.

This trend of increasing intolerance is not a good one for the internet community, including the venerable institutions involved, especially during a period of rapid industry and technology change. The behavior is being manifested again through the pursuit of a new draft document entitled "The Internet is for End Users," with prominent acknowledgment given to Edward Snowden as inspiration.

The Internet is for End Users draft document

The drafting of this document began a year ago. It begins with a kind of self-assertion of power that the related standards activities under the purview of the IAB control the marketplace "because the underlying decisions afford some uses while discouraging others." The text underscores the power, stating, "we are defining (to some degree) what is possible on the Internet itself."

These possibilities are enumerated as "it has helped people overthrow governments and revolutionize social orders, swing elections, control populations, collect data about individuals, and reveal secrets. It has created wealth for some individuals and companies while destroying others." However, omitted as to "what is possible" are rather massive criminal, terrorist, and cyberattack activities as well as the organized propagation of hate crimes, racism, and xenophobia. These are not insignificant matters, and the bad comes with the good.

This myopia of possibilities is carried over in treating "who are end users," where the existence of malevolent actors is simply ignored. The reality that "end users" may also include nation-state actors attacking elections in another state is not even considered. Indeed, malevolent internet end users are not only numerous, but typically include highly motivated, technically knowledgeable, and frequently well-financed parties and groups, and they cost the world an estimated $6 trillion. Last but not least, internet-connected organizations are increasingly vulnerable to insider threats. The draft simply asserts that the "goal" and "measurable success of the Internet" is to "empower users," without ever attempting to mention, much less treat, the potentially disastrous results of empowerment.

The IAB document asserts that the interests of end-users must be prioritized by the IETF "to ensure the long term health of the Internet." The statement begs basic questions: How can you ensure anything? What exactly constitutes long-term health? What is "the Internet" today? And what gives the IAB or IETF the right to be making these societal and market choices for the world?

These questions are especially significant in light of "the Internet" being, by definition, a virtual construct that makes use of the largely private network resources and end-point hosts of countless companies and homeowners worldwide. Making a choice to favor human end-users is, in fact, highly discriminatory and represents an allocation of resources that is not the IAB's or IETF's right to allocate or ensure.

The draft seeks to implement the proffered prioritization process by "consulting with the greater Internet community." The proposed approach itself, however, is highly discriminatory. It discards the idea that a "government-sponsored" individual could play a role, but accepts that certain "civil society organizations" could be a "primary channel." Not surprisingly, its own financial sponsors are accorded recognition. Edward Snowden is explicitly recognized as an especially important channel. An array of its own existing, highly divisive, one-sided pronouncements adopted over the past twenty years are proffered as dispositive mandates concerning network architectures, filtering, surveillance, and encryption.

At the end of the document, the flippant, if not utterly irresponsible, position is taken that "if a user can be harmed, they probably will be, somewhere." It is an amoral abdication of concern and responsibility for the harms that end-users inflict on each other and on our societal systems.

The Internet is for All

The concept that "the Internet is for all" is grounded on the reality that it exists as a virtual information network on top of all the shared participant object resources worldwide. All of those participants decide autonomously what they share and to whom and on what basis. While all of those actions are subject to law, no organization has the right or the ability to dictate for those participants worldwide, their basis for sharing — including prioritization. It is the ultimate narcissistic arrogance.

Thirty years ago, when the IAB and IETF operated under U.S. Federal government agencies, it was open, inclusive, diverse, and tolerant. It welcomed, if not encouraged, disparate viewpoints and work. Different protocols were pursued in parallel, and means of interoperation developed. The robust venues were frequently contrasted with more constrained, political, less flexible, slow, and formal network standards bodies. Now — thirty years later — the two standards communities have traded places.

Over the years, the admirable qualities of the IAB and IETF began to erode through two developments. One factor was a kind of institutional distillation of participants and decision-making leadership with largely self-similar views and motivations. Thirty years ago, the IAB composition reflected the principal interested parties in the Internet community. That clearly is not the case today.

Another corrosive development was the adoption of divisive, socio-political mandates for the standards work — some promoted by highly controversial personalities — that have far-reaching consequences and further exacerbated self-similar participant engagement. The frequently highly-charged mandates also had the secondary effect of inciting intolerance of groups and views perceived as hostile. The effects can be seen in the high turnover in attendance at IETF meetings in recent years: nearly two-thirds of the several thousand total attendees over a several-year period attended only once, while only five percent participated persistently.

Useful steps have been taken to remedy some of these institutional challenges and should be expanded. The creation of the IETF Trust and the IETF Administration LLC, and the expansion of their roles in managing the IAB, the IETF and its related bodies, can help them become more inclusive and reverse the evolution toward leadership self-similarity. After many years of silence, the recent public statement on competition law that emphasizes consumer marketplace choice was an extraordinarily positive step.

As others have pointed out, pronouncements like The Internet is for End Users are not only far removed from the appropriate role and expertise of the IAB, they also create a toxic collaborative environment that diverges significantly from its original strength: the ability to attract and accommodate multiple academic, government, and industry communities with different protocol and service requirements. Useful next steps would include going back to organizational roots by eliminating the political mandates imposed on the creation of new groups and enabling the pursuit of any discussions or work items for which there is some minimal number of participants, as is commonplace in most standards bodies. This should also facilitate collaboration with other standards bodies and enhance IETF viability in a 5G/F5G world. The Internet is for all. The IAB/IETF should embrace that maxim.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC


circleid.com | 03-Aug-2020 23:46

Bandwidth Needed to Work from Home

The pandemic made it clear that the millions of homes with no broadband or poor broadband were cut off from taking the office or the school home. But the pandemic also showed many additional millions of homes that their current ISP connection isn't up to snuff for working or doing schoolwork from home. Families often found that multiple adults and students couldn't share the bandwidth at the same time.

The simplest explanation for this is that homes were suddenly expected to connect to school or work servers, use new services like Zoom, or make telemedicine connections to talk to doctors. These new requirements have significantly different bandwidth needs than in the past, when a home's biggest bandwidth need was watching multiple video streams at the same time. Consider the following bandwidth needs listed by Zoom:

Zoom says that a home should have a 2 Mbps connection, both upload and download, to sustain a Zoom session between just two people. The amount of download bandwidth increases with each person connected to the call, meaning Zoom recommends 6 Mbps download for a meeting with three other people.
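
As a rough illustration of how that scales (my own arithmetic inferred from the two figures above, not a formula published by Zoom), the download requirement works out to roughly 2 Mbps per other participant on the call:

    # Rough per-call estimate inferred from the figures above (about 2 Mbps of
    # download per other participant); not an official Zoom formula.
    def zoom_download_mbps(other_participants: int) -> float:
        return 2.0 * other_participants

    print(zoom_download_mbps(1))  # 2.0 Mbps for a 1:1 call
    print(zoom_download_mbps(3))  # 6.0 Mbps for a meeting with three other people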

Telemedicine connections tend to require even more bandwidth than Zoom calls and demand the simultaneous use of both upload and download bandwidth. Connections to work and school servers vary in size depending upon the software being used, but the VPNs behind these connections typically require as much bandwidth as a Zoom session, or more.

Straight math shows fairly large requirements if three or four people are trying to make these kinds of two-way connections at the same time. But houses are also using bandwidth during the pandemic for traditional purposes like watching videos, gaming, web browsing, and downloading large work files.

The simplistic way to look at bandwidth needs is to add up the various uses. For instance, if four people in a home each wanted to have a Zoom conversation with another person, the home would need a simultaneous connection of 8 Mbps both up and down (a rough sketch of this arithmetic follows the list below). But bandwidth use in a house is not that simple, and a lot of other factors contribute to the quality of bandwidth connections within a home. Consider all of the following:

  • WiFi Collisions. WiFi networks can be extremely inefficient when multiple people are trying to use the same WiFi channels at the same time. Today's version of WiFi has only a few channels to choose from, and so the multiple connections on the WiFi network interfere with each other. It's not unusual for the WiFi network to add 20% to 30% of overhead, meaning that collisions of WiFi signals effectively waste usable bandwidth. A lot of this problem is going to be fixed with WiFi 6 and 6 GHz spectrum, which together will add a lot of new channels inside the home.
  • Lack of Quality of Service (QoS). Home broadband networks don't provide quality of service, which means that homes cannot prioritize data streams. If you were able to prioritize a school connection, then any problems inside the network would affect other connections first while the school connection stayed steady. Without QoS, a degraded bandwidth signal is likely to affect everybody using the Internet. This is easily demonstrated if somebody in a home tries to upload a giant data file while somebody else is using Zoom — the Zoom connection can easily drop temporarily below the needed bandwidth threshold and either freeze or drop the connection.
  • Shared Neighborhood Bandwidth. Unfortunately, a home using DSL or a cable modem doesn't only have to worry about how others in the home are using the bandwidth, because these services use shared networks within neighborhoods, and as demand across the whole neighborhood increases, the quality of the bandwidth available to everybody degrades.
  • Physical Issues. ISPs don't want to talk about it, but events like drop wires swinging in the wind can affect a DSL or cable modem connection. Cable broadband networks are also susceptible to radio interference — your connection will get a little worse when your neighbor is operating a blender or microwave oven.
  • ISP Limitations. All bandwidth is not the same. For example, the upload bandwidth in a cable company network uses the worst spectrum inside the cable network — the part that is most susceptible to interference. This never mattered in the past when everybody cared mainly about download bandwidth, but an interference-laden 10 Mbps upload stream is not going to deliver a reliable 10 Mbps connection. There are a half dozen similar limitations that ISPs never talk about that affect available bandwidth.
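
The sketch below is my own illustration, not from the article: it adds up simultaneous uses the way described above and then discounts the advertised plan speed by an assumed 25% WiFi overhead (the midpoint of the 20-30% range). Even before neighborhood congestion or the ISP limitations come into play, the upload side is the first to fall short.

    # A minimal sketch under assumed numbers: additive demand vs. plan capacity
    # after WiFi collision overhead. The 2 Mbps per 1:1 call follows the Zoom
    # figures above; the plan speeds and 25% overhead are assumptions.
    ZOOM_MBPS = 2.0        # upload and download per 1:1 video call
    WIFI_OVERHEAD = 0.25   # assumed midpoint of the 20-30% collision overhead

    def usable(plan_mbps: float) -> float:
        """Capacity left over after WiFi collision overhead."""
        return plan_mbps * (1 - WIFI_OVERHEAD)

    def needed(calls: int, background_mbps: float = 0.0) -> float:
        """Simple additive demand: simultaneous 1:1 calls plus other traffic."""
        return calls * ZOOM_MBPS + background_mbps

    # Four people on separate 1:1 calls plus a 5 Mbps video stream, on an
    # assumed 25 Mbps down / 10 Mbps up plan.
    print(needed(4, 5.0) <= usable(25.0))  # True:  13 Mbps needed vs. ~18.75 usable down
    print(needed(4) <= usable(10.0))       # False:  8 Mbps needed vs. ~7.5 usable up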

The average home experiencing problems when working at home during the pandemic is unlikely to fully diagnose the reasons for the poor bandwidth. It is fairly obvious that multiple Zoom connections will suffer if the home upload speed isn't fast enough to accommodate all of them. But beyond the lack of broadband capacity, it is not easy for a homeowner to understand the other local problems affecting their broadband experience. The easiest fix for home broadband problems is for an ISP to offer and deliver faster speeds, since excess capacity can overcome many of the other problems that might be plaguing a given home.

Written by Doug Dawson, President at CCG Consulting


circleid.com | 03-Aug-2020 21:52

Trump Wants to Change the Communications Decency Act

Section 230 of the Communications Decency Act (CDA) says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

The law was passed in 1996 in order to shield ISPs that transported content, and platforms that hosted it, from liability. Bloggers were not responsible for comments on their posts, YouTube and Facebook were not responsible for things users posted, etc. However, ISPs and content hosts have the right to set their own acceptable-use policies and can label or censor material that violates those policies.

For example, when Donald Trump posted unsubstantiated claims about mail-in ballots on Twitter, the company added a fact-checking link reading "Get the facts about mail-in ballots" to the post.

(Note that Trump has tweeted the same claim about mail-in ballots on other occasions, and those were not marked).

Trump's response to Twitter's labeling of his tweet was to issue an executive order on "preventing online censorship" calling for the Federal Communications Commission to clarify that "the CDA does not permit social media companies that alter or editorialize users' speech to escape civil liability." Trump evidently wants to be able to sue Twitter for appending a fact-check link to his post.

This is a familiar Trump tactic, as made clear in the book "Plaintiff in Chief: A Portrait of Donald Trump in 3,500 Lawsuits" by former federal prosecutor James D. Zirin, a Republican. Click here for an American Bar Association review of the book.

The day he issued his executive order, Trump, the most powerful victim on the planet, tweeted the following, presumably to justify his action:

I am not a lawyer or an Attorney General, but it seems clear to me that Twitter and others have the right to publish fact-checking material on their websites, and I doubt that this executive order will prevail if challenged.

Written by Larry Press, Professor of Information Systems at California State University


circleid.com | 30-Jul-2020 00:34

2020's New Internet Success – Rejoinder

The posting with a similar name seems a bit contrived, published anonymously in some strange attempt to enhance its significance. Many others, including myself, have been discussing this subject for some time. Indeed, concerted lobbying and anti-competitive efforts by legacy TCP/IP internet stakeholders have ramped up over the past year to mischaracterize what is occurring. One common feature seems to be indirectly promoting Washington's racist/xenophobic mantras about China. Some of the article's observations are obvious, if not interesting. A few others are just factually wrong. As a result, the article has the look and feel of fake news even if it does raise some good points. In any case, it is worthy of rejoinders.

Historically, it was France that developed the first internet protocol, which was subsequently picked up by the U.S., which made it part of its national standards and introduced the specifications into ITU-T and ISO as internet CLNP; those specifications remain in effect today. CLNP was a better protocol but, alas, was killed off when U.S. politics changed. What became promoted as TCP/IP was a skunkworks competing protocol developed within DARPA and academic communities that has fundamental flaws and has outlived its useful life.

As Karl Auerbach notes, there have been and remain many competing internet protocols. The most compelling ones with the greatest industry support are moving forward within 3GPP and the MEF Forum. Starlink is rolling another competitor out for its satellite system. Getting traction on any of them in a global marketplace is the non-trivial challenge.

The really strange part of the article is the "place and timing" section. The ITU-T, with its many groups, is only one of a constellation of venues in which to float new protocol ideas. What has been presented recently and attributed to "China" is not significantly different than what has been done many times before. Internet pioneer Larry Roberts did something very similar in the ITU-T 14 years ago, with some significant buy-in from UK and Asian companies. There simply is no "place and timing." The 2024 date and the "certain features" stuff are plainly conjecture. The principal venues for new network and transport protocols and services are clearly other venues like 3GPP, ETSI, and the MEF Forum.

The "many benefits" section seems rather sensationalized in a way that enhances xenophobic stereotypes and paranoia. Trusted knowledge of endpoints was a key feature of the U.S. CLNP specifications. The "growing consensus" also seems completely bogus. No one would ever accuse Washington of being "unconcerned," or "taking less interest in Internet Governance." The problem is that Washington is still living in a mythical world of Internet Governance which it created 20 years ago — which is deserving of a J.K. Rowling novel or maybe a computer game.

The "give it time" concluding section does, however, impart a useful admonition — to enable "benefits of competition in matters of Internet protocols and to allow them to flourish." NFV based 5G/F5G enables on-demand instantiation of any architectures and services using whatever transport and network protocols anyone wants to order, create or can sell. MEF 3.0 seems the most attractive. There will be enormous numbers of providers offering these capabilities, not just China.

It is also edifying to note the increasing scholarly research that points out how some things don't change, and that the U.S.–China conflict over markets being played out in standards bodies and political posturing is oddly similar to a German and UK rivalry 120 years ago.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC


circleid.com | 29-Jul-2020 18:28

Google Announces New Subsea Cable Running Between U.S., U.K. And Spain

Google announces the Grace Hopper subsea cable / July 28, 2020

This morning Google announced a new subsea cable that will link the United States, the United Kingdom and Spain. The cable, named "Grace Hopper" after the American computer science pioneer Grace Brewster Murray Hopper, will join Google's other subsea cables, Curie, Dunant and Equiano, in connecting far-flung continents along the ocean floor. The company says: "Grace Hopper cable will be one of the first new cables to connect the U.S. and U.K. since 2003, increasing capacity on this busy global crossroads and powering Google services like Meet, Gmail and Google Cloud. It also marks our first investment in a private subsea cable route to the U.K., and our first-ever route to Spain." Also noted: the cable uses 16 fiber pairs and incorporates novel optical fiber switching that Google says will increase reliability in global communications and enable it to better move traffic around outages.


circleid.com | 28-Jul-2020 22:44

2020's New Internet Success

CircleID has taken the rare exception to publish this essay anonymously at the request of the author. The reason for anonymity is not to avoid personal or professional harm, says the author, but to drive a point regarding the critical subject matter discussed.

Chinese technology policy is now even more effective than the country's naval posture in the South China Sea, and both are playing out in full sunshine. This success is not about the hardware pillar of Chinese tech policy, though: its focus is the structural approach China and, increasingly, other stakeholders are taking to global Internet Governance.

Place and Timing

Late in the Year of the Pig just gone, China's offer of a New Internet Protocol was chewed over in senior-level advisory groups of the International Telecommunication Union (ITU) after which the formal consensus-building process of that UN organization considered the matter in March and again in July of this year. There were a few briefings and workshops to explain the merits of the system in between times, particularly in Africa, where the ground for novelty and inclusion is fertile. From these briefings, it is clear that a timeframe for rolling out an alternative internet protocol is still notional, but that proponents are aiming at 2024. In the meantime, Huawei, at least, is attracting significant interest around the opportunity by announcing that certain features of New IP will be embedded in their networks within the next 10 years, offering all stakeholders time to evaluate and appreciate the new approach.

Many Benefits

The "top-down" New IP system will provide a universally-accessible alternative to the current TCP/IP model that has dominated until now. New IP is proving attractive to governments particularly because it assigns each user of the Internet a unique token for naming and addressing that marks all their activity in cyberspace: an obvious but effective way to limit excessive user privacy and eliminate the wet anonymity that has been characteristic of modern creatives from the authors of the Federalist Papers to JK Rowling. New IP will enable central national authorities to manage the authentication process directly, so that under-resourced governments will now be able totally to deliver on the promise of meting out security among their citizens and anyone they interact with online. In this way, freedom of expression is preserved while creating accountability for that expression to any government that believes it should be punished. As importantly, under this approach e-commerce can also be more effectively surveilled, and web-based innovations or content that build on the work of others can be tracked to their sources and addressed in the way the government of that creator's country deems the most expedient, or effective, or quickest. And finally, there will be a rebalancing of charges within eCommerce as well. With embedded "kill-switch" functionalities the New IP will become a practical tool to enforce new protective tariffs on digital services and for extracting appropriate revenue from multinationals, in time to help answer the call of many governments that are already trying hard to ensure the WTO Moratorium on Electronic Transmission does not get extended.

Growing Consensus

The robust proposals of New IP have currency from more than just the Middle Kingdom; in Europe, they enjoy support from Telecom Italia, and in Africa from at least eleven governments: Burundi, Côte d'Ivoire, Guinea, Mali, Niger, Nigeria, Senegal, South Sudan, Tanzania, Zambia and Zimbabwe. There is every reason to expect — there's no case being made to the contrary, so consensus can only build — that other European entities and governments, those that enjoy strong partnerships with proponents of New IP, will join their voices to this initiative. The United States, which can be sceptical about Chinese initiatives, is today not engaged here, and it is possible to read from this that U.S. stakeholders are unconcerned, agree, or simply take less interest in Internet Governance.

Root Success

In parallel, Huawei is successfully using its diplomatic and commercial influence to generate further governmental support for the initiative. They have helpfully offered trials of the New IP-enabled applications that are filling actual gaps in many countries: those connected to agriculture, remote education, and artificial intelligence. It is unlikely that such large trials will see a reversion back to use of the old root zone, and so many important industries in the developing world should now become zealous champions of New IP, not just for themselves but in a way that gives them a stake in universalizing the use of the new protocols worldwide.

Give it Time

Over the long run, beneficiaries of the Internet economy may become interested in the possibility of increased compliance costs associated with operating with more than one network. But there'll be time enough to consider the validity of such concerns since any costs will become clear once there are two systems in full operation worldwide. Here too, quiet observation and cautious but patient restraint by those most affected will provide the space necessary for China to demonstrate the benefits of competition in matters of Internet protocols and to allow them to flourish — a thousand such may yet bloom — out in the bright sunshine.

Anonymous, TCP/IP.

Written by CircleID Reporter


circleid.com | 28-Jul-2020 22:23

In China, Email Addresses Are Irrelevant

Great article by the BBC about email vs. mobile apps in China — and why email is losing out to the most popular apps.

It's important for Westerners such as myself to remember that most of the world did not first interact with the Internet via desktop computer. In most emerging markets, people leapfrogged computers altogether on their way to using mobile apps.

"Matthew Brennan, a Briton who has worked in China since 2004 and is a consultant on Chinese digital innovation, says that having an email address in the UK is part of your identity as it's required to register for many online services. In China, however, mobile apps often take precedence and it is possible to do all your online transactions once you are logged into an app with multiple functionality such as WeChat or Alipay (created by online retail giant Alibaba) You can book an appointment, pay for shopping and message your friends all within a single app."

If someone were to ask me to do all of my emailing via iPhone, I would soon look for other ways to communicate.

It's also worth noting that the Internet was Latin-biased from the beginning. That is, the people who created the Internet did not take into account the many different scripts used around the world. That is why email addresses were historically limited to Latin-based characters (viable workarounds exist today, though they are still not very popular).
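
The domain-name side of those workarounds is easy to demonstrate. The sketch below is a minimal illustration using Python's built-in IDNA codec, with a made-up domain: a non-Latin label is mapped to the ASCII "punycode" form that DNS and mail servers actually exchange. The local part of an email address is a separate matter and requires SMTPUTF8/EAI support on the mail servers involved.

    # Minimal illustration using Python's built-in IDNA codec; the domain is a
    # hypothetical example. Only the domain part of an address is covered here.
    domain = "bücher.example"
    ascii_form = domain.encode("idna")   # b'xn--bcher-kva.example'
    print(ascii_form)
    print(ascii_form.decode("idna"))     # round-trips back to 'bücher.example'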

And herein lies the most important takeaway for anyone looking to expand into a new market, such as China. It's not simply a matter of localizing your website, particularly if you expect customers to reach out to you via email or Facebook (blocked) or Twitter (also blocked).

You need to understand not just the applications people use but how they use them and why they use them.

Written by John Yunker, Author and founder of Byte Level Research


circleid.com | 28-Jul-2020 21:42

Internet Governance Calendar During the COVID-19 Pandemic

Calendar of Internet Governance Meetings in 2020 – Distribution of 184 meetings in 2020 (Source: PCH Internet governance meetings calendar, https://www.internetmeetings.org)

Co-authored by Bill Woodcock, PCH Executive Director; Gael Hernandez, PCH Senior Manager of Interconnection Policy and Regulatory Affairs; and Sara Hassan, PCH System Administrator.

The Internet has become an integral part of our lives. Its growth is dependent upon the interaction of engineers, researchers, and network operators to advance networking technologies, policies, governance mechanisms, and deployment. In this undertaking, the Internet's multi-stakeholder governance has relied heavily upon regular face-to-face meetings and conferences to gather individuals and organizations from numerous participating communities. As the COVID-19 pandemic has closed borders and shut down countries, the Internet community has had to adapt its own mechanisms to address this unexpected challenge.

A Busy Internet Governance Calendar

More than two hundred Internet governance events were organized during 2019 in more than twenty-five countries by the different communities shaping the future of the net. The Internet Engineering Task Force (IETF), for instance, a large international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet, met in Prague (Czech Republic), Montreal (Canada), and Singapore, with about one thousand onsite attendees and eight hundred remote attendees in each meeting. ICANN, the Internet Corporation for Assigned Names and Numbers, which coordinates unique identifiers (protocols, domain names and numbers), organizes three events per year with a regional rotation. Last year, ICANN meetings took place in Kobe (Japan), Marrakech (Morocco), and Montreal (Canada). Similarly, the five regional Internet registries (ARIN, LACNIC, RIPE, AfriNIC, and APNIC) each organizes one or two events per year. The Internet governance forum, regional peering forums, security forums, network operators' group (NOG) events, and IXP events are also examples of the bottom-up and multi-stakeholder governance approach to decision making and policy development in the Internet.

Common to all these events is the gathering of many constituents in a single locale for several days, with conference and work meetings during the day and social activities in the evenings. Many governance events do include aspects of remote participation, such as discussion on mailing lists, but physical interaction and face-to-face discussion can accelerate consensus building, a key decision-making process in Internet governance.

No More Face-To-Face Gatherings, Everyone Stay at Home

The COVID-19 pandemic has led health officials to declare a global health emergency. Most governments have issued strict social distancing measures, including banning gatherings to reduce the risk of contagion. Well over 100 countries worldwide had instituted either a full or partial lockdown by the end of March 2020, affecting billions of people. Gradually, many countries decided to close their national borders, resulting in an unprecedented global lockdown.

These restrictions have had an impact on the meetings and conferences that make up the Internet governance calendar, with events being canceled, limited to teleconference-only, and rescheduled. Significant effort has been invested by event organizers to adapt programs and event dynamics to online-only formats.

Packet Clearing House (PCH), a global non-profit that provides operational support and security to critical Internet infrastructure, maintains a public calendar of Internet governance events that anyone can view or subscribe to.

As we can observe in the histogram, all 103 events planned for March, April, May, June, and July 2020 have been either canceled, rescheduled, or limited to tele-conferencing. The 2020 African Internet Summit, which had been planned to take place in Kinshasa in June, was canceled outright and then rescheduled to take place online in September. The African Peering and Interconnection Forum was rescheduled to next year. Only a few face-to-face events in August and September have not been canceled or postponed. Most events planned for October, November, and December have not yet been canceled, but some such as RIPE 81 will be conducted only online.

Overall, 17.2% of the events have been canceled, and 7.8% have been rescheduled for 2021 (coded in pink). Of the remaining 75% that are still active, more than half (54.8%) will take place by teleconference (coded in light blue in the histogram).

Virtual Conferences: A Successful Experiment So Far

The COVID-19 outbreak has forced the Internet community to turn to online-only virtual events in order to continue advancing work. Almost half of the scheduled meetings in 2020 are now set to be online. The first organization to do this was ICANN, which was quick to move its 67th public meeting to an online format, on March 7 of this year.

Another side effect of travel restrictions is a surge in a new type of event addressing the global Internet governance community. Webinars, seminars conducted via videoconference, now account for 31.85% of the year's total, and that portion continues to increase as more webinars are scheduled and remaining face-to-face meetings continue to be canceled. Organizations are increasingly finding ways of doing outreach and garnering participation in the absence of face-to-face meetings.

Although virtual conferences fail to provide some of the opportunities that physical gatherings do, virtualization has allowed total participation numbers to increase. The RIPE 80 meeting, originally scheduled to take place in Berlin over four days in May, was shortened to two days. About 2,000 attendees registered, and 1,148 actually participated, 691 via a dedicated teleconferencing tool and 457 via video streaming on the web. In contrast, the second-largest RIPE conference by number of participants (RIPE 78 in Reykjavík) had 742 attendees. The RIPE Network Coordination Centre had to make changes to the traditional meeting agenda to fit the shortened schedule and reconceive the means of interaction among the session chairs, presenters, and audience. Feedback and discussions on RIPE's mailing lists suggest that the experiment was considered a success, and this positive experience probably helped RIPE decide to hold the upcoming October meeting online as well.

Reducing Barriers to Participation

COVID-19 restrictions have created a great deal of turmoil, but they have simultaneously increased accessibility and reduced barriers to participation in ways that may help mend the rift of the "digital divide" and increase equity for developing nations. Teleconferences enable anyone with a working Internet connection and a computer or smartphone to participate in events on an equal basis. Although the amount of bandwidth available to people in developing countries may still pose an obstacle in some cases, and fluency in the language in which the meeting is held is still necessary for full real-time interaction, many of the greatest impediments have been relieved: often-unavailable visas are no longer required, scarce hotel rooms and expensive and time-consuming air travel need no longer be arranged. Perhaps even more important, although virtualized meetings may require participation at odd hours of the night, they no longer require participants to spend days or weeks away from their jobs and families. Because the vast majority of Internet governance meetings have historically been held in developed countries, and air travel between developed countries is more available and less expensive than air travel to or from developing countries, the burden of participation has been much heavier for those from developing countries.

Internet Governance Events Held in Developing Countries (Source: PCH Internet governance meetings calendar, https://www.internetmeetings.org)


Because resources are scarcer in developing countries and their currencies often fluctuate relative to those of the countries in which meetings have more often been held, each trip poses a greater financial burden and risk for those traveling from developing countries. And visa restrictions, where they exist, are usually imposed upon developing countries by developed ones.

In these circumstances, we see the increasing prevalence of online meetings as a path to more equitable participation in Internet governance and development. We believe that reducing the artificial barriers to entry will broaden participation, yielding broader dispersion of knowledge, better decision-making, and better outcomes.

Although this "new normal" may not be desirable in all ways, the Internet using the Internet to govern the Internet is a change for the better, one we fervently hope can be made permanent. An Internet designed by all is an Internet that will better serve all.

Written by Sara Hassan, System Administrator at PCH


circleid.com | 28-Jul-2020 18:21

Washington's 5G Mania Endpoint – Global CyberBalkanisation

Over the past two years, governments and foreign intelligence agencies around the world have tried to understand the inexplicable, chaotic, irrational, indeed maniacal 5G policies of the Trump Administration. Revelations by former Trump administration officials and most recently Trump's niece confirm that there is no rational basis for Trumpian positions and policies and that the best response is to recognize that Washington is no longer capable of playing a meaningful role as an architect of international law and standards shaping global 5G communication networks.

The most important question now is, what does this mean for the future of 5G? What will be the effects on the deployment and operation of global communication architectures and services? The answer, in a word, is CyberBalkanisation. It is being rapidly accelerated — not unlike the COVID virus — courtesy of Donald Trump.

The term CyberBalkanisation is not new, even if somewhat esoterically confined to communication theorists, and usually cast as an internet phenomenon. One of the earliest treatments of the subject occurred at a 1997 MIT workshop in the form of a prescient paper, Electronic Communities: Global Village or Cyberbalkans? Trumpism's rampant xenophobia, anti-globalism, and setting of people against each other, manifested in the form of 5G Mania, have the effect of pouring gasoline on the CyberBalkan fire.

5G Manifestations of CyberBalkanisation

The principal innovation and importance of 5G is the ability to orchestrate virtual architectures and services on demand. It has little to do with radio spectrum or transceivers — which are basically low-margin commodity devices mass-produced by a handful of vendors. Indeed, F5G (non-radio-based network access ports) is bundled into the network architecture. Much of the virtualization work was done over a several-year period in the ETSI Network Functions Virtualisation (NFV) standards group and then moved into 3GPP and several other groups for implementation. More than 300 companies were involved — many of them U.S. based.

The U.S. government was essentially not engaged in any of the work and didn't have a clue about what was occurring. Anyone interested can go to the highly transparent meeting records and see for themselves. This wasn't especially a Trump malady. The USG for the past twenty years has eliminated its engagement capabilities in global industry bodies except for its fanciful affection for its own native DARPA internet and non-technical academic venues.

However, as the 5G and related specifications began moving to maturity in 3GPP and other bodies, the platform became ensnared within the Trump Washington dysfunctional meatgrinder. It was essentially a Perfect Storm. There were two winds blowing. One was the traditional spectrum lobby, which saw 5G as a goldmine for acquiring additional radio spectrum allocations. The second began as uniquely Trumpian — force China into favorable trade concessions by spinning up xenophobic and racist fears that 5G equipment and services were allegedly being used to "spy on America." In a kind of Trumpian achievement, even Democratic Congressional committees have recently attempted to "outTrump Trump" by painting 5G as a Chinese-led "global conspiracy of digital authoritarianism."

With the CyberBalkanisation train now in runaway mode, the results in 5G terms are dire. Every nation potentially becomes its own island of network architectures, devices and services. Every provider, the equipment and services they offer, and the customer data they keep get compartmentalized within each country. It is the cyber equivalent of "build the wall." Only human-to-human communication is allowed internationally, and both endpoint countries apply their law through contracts between the two terminating service providers approved by each national government — as the FCC has begun doing.

In historical terms, Washington's 5G Mania endpoint potentially takes everyone back to the days before the first electrical communication treaty in 1850. Every national network was an island, and telegraph operators copied the messages on paper, which were handed across the border to another operator. The messages were examined for hidden codes. The principal existential question now is how far along that regression path we collectively go.

In legal terms, the 5G Mania endpoint potentially moves the world back to before the 1988 Melbourne Treaty and the WTO GATS, when CyberBalkanisation was the norm. Underlying international fiber optic cable and satellite communication circuits could not be used to provide services to the public, and network equipment and phones could not be freely moved among nations. The only allowable international services were regulated and carefully controlled voice telephony, data, and network support services.

Indeed, one of Trump's most damaging and likely most enduring harms inflicted on the U.S. globally is his systematic elimination of international agreements of all kinds, together with the associated intergovernmental systems of cooperation. The result is that there is no basis for any other country to trust the U.S. in any transnational matter. As Mary Trump notes, the U.S. now operates under a leader who will say anything for his own aggrandizement, and there is zero trust in anything said from one minute to the next. The adverse CyberBalkanisation impact is already being felt by Europeans, who are demanding their data be removed from the U.S. It represents a new normal. Any residual U.S. trust as a global leader collapsed completely during the pandemic — especially with Trump's withdrawal from the WHO to promote his re-election. As the polls indicate, the U.S. is no longer regarded as a force for good.

5G CyberBalkanisation's Biggest Loser Is the United States

The biggest loser in the Back to the Future world of 5G CyberBalkanisation brought about by Trump and his friends is the United States. Even though the U.S. failed to abide by the Melbourne treaty and GATS, it became by far the biggest beneficiary by the late 1990s. Its major companies' most valuable strategic assets have long been the ability to analyze, resolve, and propagate tailored content and software on a global scale at high margins from common U.S.-based facilities that were TCP/IP centric. The centricity could even be monitored through the CAIDA topology maps.

This U.S.-centric configuration also enabled U.S. companies to generate huge amounts of valuable metadata about network endpoints and user behavior by maintaining and processing it at the same common U.S.-based cloud data centers, free from control. These business advantages in large measure depended on avoiding any CyberBalkanisation, whether technical or legal.

The emergence of 5G provided the U.S. with a win-win opportunity — where Huawei and other Chinese vendors earned revenue selling large numbers of low-cost 5G/F5G access boxes worldwide on the periphery of 5G networks, while U.S. vendors earned revenue selling tailored virtual network and service orchestration services and content delivery from cloud data centers. Somewhat resembling Amazon, U.S. companies provided the consumer goods while China provided the trucks.

However, the clueless, clumsy antics of Trump and his supporters (they don't deserve to be called strategies) have collectively taken us back to a world of CyberBalkanisation. The Administration's upending of domestic and international legal systems, combined with unending pretexts for attacks on China and Chinese companies, now places the U.S. in its own insular 5G Balkan State.

5G CyberBalkanisation, combined with other Trumpian machinations, profoundly and adversely affects U.S. users and companies, as well as the nation's functional capacities.

U.S. end-users and companies are already denied access to some of the most advanced 5G network equipment and mobile devices, and the costs of what is available in the U.S. are higher. Meanwhile, most of the rest of the world will have the advantage of widespread deployment of the most advanced equipment and devices, including new services instantiated using them. The U.S. share of the global 5G market will shrink, disincentivizing hardware vendors (who are almost entirely offshore) from aggressively pursuing it. Trump's treatment of Huawei is essentially a warning to others that a dysfunctional U.S. president's vicarious ire, expressed in a tweet, can result in being summarily banned from the national market.

The harm to U.S. consumers is also significantly exacerbated by the utter lack of any national 5G cybersecurity oversight or requirements. Almost alone in the world, U.S. government agencies today, with only a few minor exceptions, do not even engage in industry bodies to understand and help establish 5G security requirements. The engagement of U.S. companies in those bodies has also been minimal because there is little incentive anymore to expend the resources. The contrast is dramatic compared with vendors from other countries — especially China — who have substantial incentives to demonstrate significant attention to cybersecurity and participate significantly. The disparity is striking in almost every industry 5G technical venue today.

The harm to U.S. companies in strategically competitive areas such as network and service instantiation, and content delivery and analysis, is even more dramatic. Many are already establishing more subsidiaries and data centers outside the U.S., all over the world. In a CyberBalkanised world, "what happens in the CyberBalkan State, stays in the CyberBalkan State." This shift in network and service architectures is also supported by some of the new 5G technologies such as Multi-access Edge Computing (MEC) and the shift to low-latency non-IP protocols being rolled out by MEF Forum. Being successful in the rapidly emerging 5G world will increasingly depend on a company's understanding of and engagement in the fundamental industry platform evolutions and the ability to deploy them in multiple markets worldwide.

The harm to U.S. national functional capacities is diverse and pervasive and will be felt for years. Shutting off the admittance of scholars and foreign expert employees deprives U.S. academic institutions and companies of innovative and enthusiastic young talent from around the world who bring fresh perspectives to the work. Trump's draconian exercise of CFIUS and Export Administration Regulation (EAR) powers similarly deprives the nation of the ability to promote a competitive industry in the global 5G marketplace and drives the U.S. toward becoming a CyberBalkan State. Withdrawing from international agreements and activities to the point of eliminating institutional and staff expertise throughout the Federal government completes the encapsulation and isolation by precluding even the most basic understanding of what is occurring in the 5G world.

The sum of all harms being inflicted by Washington 5G Mania is undoubtedly exceeded by those inflicted on the health of U.S. citizens, the national economy, and the environment. U.S. recovery will take many years — even with a new Biden Administration, which will have its own reconstruction challenges. In the meantime — as in many other matters — the consensus among analysts is that Europe is emerging as the trusted global leader presiding over an increasingly CyberBalkanised world.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC


circleid.com | 27-Jul-2020 20:41

U.S. Department of Energy Unveils Blueprint for the Quantum Internet

Argonne Director Paul Kearns speaking during a press conference on Thursday at the University of Chicago.

In a press conference on Thursday at the University of Chicago, the U.S. Department of Energy (DOE) unveiled a report that outlines a blueprint strategy for developing a national quantum internet. Currently in its initial stages of development, the quantum internet could provide a secure communications network and is believed to have a significant impact in areas critical to science, industry, and national security. "Scientists now believe that the construction of a prototype will be within reach over the next decade," says the Department of Energy. Some of the unique foreseen benefits of a quantum internet:

Quantum transmissions are exceedingly difficult to eavesdrop on as information passes between locations. Scientists plan to use that trait to make virtually unhackable networks.

Quantum internet could potentially expedite the exchange of vast amounts of data. According to the report, if the components can be combined and scaled, society may be at the cusp of a breakthrough in data communication.

The creation of ultra-sensitive quantum sensors could allow engineers to better monitor and predict earthquakes or search for underground deposits of oil, gas, or minerals. Such sensors could also have applications in health care and imaging.

Crucial steps toward building such an internet are already underway in the Chicago region, one of the leading global hubs for quantum research. In February of this year, scientists from DOE's Argonne National Laboratory in Lemont, Illinois, and the University of Chicago entangled photons across a 52-mile "quantum loop" in the Chicago suburbs, successfully establishing one of the longest land-based quantum networks in the nation. That network will soon be connected to DOE's Fermilab in Batavia, Illinois, establishing a three-node, 80-mile testbed.


circleid.com | 24-Jul-2020 21:31

What Trademark Owners Need to Know to Avoid Reverse Domain Name Hijacking

Co-authored by Ken Linscott, product director, Domains and Security, and Natalie Leroy, senior IP advisor at CSC.

A cybersecurity company recently attempted reverse domain name hijacking for an exact match domain name of its brand, and in so doing, failed in its bid to take ownership of the domain and potentially damaged its reputation by using this somewhat nefarious tactic and abusing the Uniform Domain Name Dispute Resolution Policy (UDRP) process1.

What is reverse domain name hijacking?

Reverse domain name hijacking, commonly known as RDNH in domain name dispute cases, occurs when a trademark owner attempts to secure a domain name by falsely making claims of cybersquatting against a domain name owner.

This is unlike domain name hijacking, which is usually associated with cybercrime where the domain name is stolen through unauthorized access to the domain management account, or domain name system (DNS) hijacking where the name servers for a domain are changed through similar unauthorized access.

In other words, RDNH is where a trademark owner uses UDRP proceedings to coerce an individual domain owner into surrendering their rights to a domain name. This tactic is in breach of the rules, which clearly state that the complainant must certify that they are not using the process improperly as a means to harass a domain holder, and that they are acting in good faith with reasonable argument.

"If after considering the submissions, the panel finds that the complaint was brought in bad faith, for example, in an attempt at reverse domain name hijacking, or was brought primarily to harass the domain-name holder, the panel shall declare in its decision that the complaint was brought in bad faith and constitutes an abuse of the administrative proceeding." [Internet Corporation for Assigned Names and Numbers' (ICANN) Rules for Uniform Domain Name Dispute Resolution Policy (Rules), Paragraph 15(e)]2

It's therefore important that companies fully understand the dispute resolution process in detail to avoid being found to have used this tactic. Furthermore, because cybersquatting is on the rise and third-party registrants are continuing to extort money from legitimate businesses and trademark owners, companies need to understand how to avoid a panel issuing a finding of RDNH against them.

Although a finding of RDNH does not carry a financial penalty, it will go on the public record and taint any future complaint. Panels are also usually quite ruthless in their choice of words, and RDNH is newsworthy, which may lead to reputation damage for a complainant found guilty of RDNH. Finally, RDNH is an offence under the Anti-Cybersquatting Consumer Protection Act, meaning U.S.-based domain name owners may sue in federal district court for damages of up to $100,000.

What can companies do to avoid attempted RDNH?

Specifically, in the case of a trademark composed of generic words, or when the domain name has no content, there is an increased risk of the respondent calling for a ruling of RDNH. Trademark owners with sufficient grounds, or who do not display bad faith, are able to avert an RDNH ruling against them. (See case examples: D2007-0965 and D2018-0235.)

We recommend trademark owners:

  • Make sure the trademark or the rights predate the domain name registration or acquisition by the last registrant (in case a domain name has changed hands). If prior rights can't be proven, it will be difficult to claim "bad faith," since under the UDRP, this hinges on registration and use.
  • Document how well the trademark was known at the time the disputed domain name was registered; how well it's known now is irrelevant if the domain name is 20 years old.
  • Substantiate claims. Don't make allegations in one's favor or discredit the respondent without evidence.
  • Be honest with the panel. If there was an attempt to buy the domain name from the registrant before starting the UDRP, say so. No panel is going to blame a complainant for trying to recover a domain name more quickly or cheaply than through a UDRP; however, they're not going to be impressed if a complainant says the respondent tried to sell the domain name to them for an unreasonable fee when the complainant initiated the discussion.
  • Avoid blatant attempts to entrap the respondent or mislead the panel, for instance, only putting forward incomplete material evidence, details of which then come to light when the registrant files their response.
  • Finally, carefully consider how and whom to trust to file disputes. A boilerplate approach without careful consideration of the facts risks leaving the trademark owner open to not just a loss, but perhaps also the accusation of below-the-belt tactics.

  1. https://domainnamewire.com/2020/06/02/siemplify-reverse-domain-name-hijacking/ 
  2. https://www.icann.org/resources/pages/udrp-rules-2015-03-11-en 
  This article was originally published on Digital Brand Insider.

Written by Ken Linscott, Product Director, Domains and Security at CSC


circleid.com | 24-Jul-2020 18:50

5G Carriers Hoping for Handouts

The Information Technology Industry Council (ITI) published a recent report that looks at "5G Policy Principles and 5G Essentials for Global Policymakers." For those who don't know ITI, they are a DC-based lobbying group that represents most of the heavy-hitter tech firms and works to help shape policy on tax, trade, talent, security, access, and sustainability issues. I don't think I've seen another document that so clearly outlines the hopes of the big US cellular companies.

The paper makes specific policy proposals. In the area of innovation and investment, the paper proposes that the government provide incentives for 5G research and development. It asks governments to support open and interoperable network solutions so that 5G technology works everywhere — unlike with 4G where US cellphones don't work in Europe. ITI warns that the industry will need a lot more datacenter technicians, cloud administrators, and cybersecurity experts and asks governments to invest in workforce training. Finally, it asks for the free flow of data across borders.

In the area of 5G deployment, the report recommends freeing up more spectrum for 5G. The report also recommends harmonizing spectrum bands around the world to help make handsets universally usable. There is a recommendation to use targeted government funding to complement private sector investment in 5G. Finally, the report asks for governments to force local siting and licensing reform to speed up 5G deployment.

In the area of 5G security, the paper promotes the idea of supply chain security to 'consider the geopolitical implications of manufacturing locations' (keeping out the Chinese). The ITI also suggests that the government and industry must share responsibility and collaborate on security.

Finally, in the area of standards, the ITI asks that governments avoid promoting country-specific standards to promote worldwide interoperability — something we failed to do with 4G. The paper suggests that governments should encourage consistent industry engagement in worldwide efforts to create standards.

The paper is titled to suggest that it is a list of policies to be pursued globally. But once I digested all of the recommendations, it's clear that this is a paper intended to influence U.S. policymakers. Some of the recommendations, such as pushing federal solutions to override local barriers to 5G deployment, are strictly U.S. issues. Most of the countries around the planet rely on cellular broadband as the primary source of connectivity, and in most countries, the rules are already slanted in favor of allowing wireless deployment.

If there were any doubts that this piece is sponsored by the big carriers, the paper ends with a summary of the conclusions of a 2018 report from Accenture that was published at the height of the 5G hype. That paper claims that "In the United States alone, 5G is expected to generate up to $275 billion in infrastructure investment, thus creating approximately three million new jobs and boosting GDP by $500 billion annually."

The current reality of the 5G industry is already vastly different than that 2018 vision. Over the last few years, the big telcos have laid off many tens of thousands of workers and are heading in the exact opposite direction as suggested by the quote. In a recent blog, I noted that the cellular companies are still struggling to define an economic business case for 5G. At least for now, this doesn't feel like an industry headed for those lofty goals.

The paper goes on to make huge claims for 5G. For instance, the paper claims that 5G has the ultimate capacity to deliver 20 Gbps broadband speeds. That's such an outlandish claim that there is not much that can be done with it other than an eye-roll.

The paper also touts that 5G will ultimately be able to handle up to 1,000,000 separate connections to devices in a square mile from a single transmitter. If that claim were realistic, I have to wonder why the carriers are bothering to build small cells if a single cell site will have that much capacity. The paper also envisions a world where every device in our lives is connected to a 5G data plan so that we have to pay to connect devices. That ignores the reality that WiFi has already won the connectivity battle and that WiFi will be magnitudes better with the introduction of WiFi 6 and the 6 GHz spectrum band.

This is an industry piece aimed at persuading legislators that 5G is an amazing technology — the paper stops just short of claiming that 5G can leap over tall buildings in a single bound. However, most of the paper also paints a picture of an industry that wants big government handouts to achieve the technology goals. The recommendations in the paper ask for government financial help for training staff and ask for subsidized R&D. The paper also wants government help in eliminating regulation and squashing any local input into the placement of cell sites. It's hard to understand why an industry that is going to conquer the world and create $500 billion in annual GDP, as this paper suggests, would need so much government help.

Written by Doug Dawson, President at CCG Consulting


circleid.com | 24-Jul-2020 05:03

The State of DNS Abuse: Moving Backward, Not Forward

ICANN's founding promise and mandate are optimistic — ensure a stable and secure internet that benefits the internet community as a whole. Recent months, however, have highlighted the uncomfortable truth that ICANN's and the industry's approach to DNS abuse is actually moving backward, ignoring growing problems, abdicating on important policy issues, and making excuses for not acting. Further, the impending failure of ICANN's new WHOIS policy to address cybersecurity concerns will add fuel to the fire, resulting in accelerating DNS abuse that harms internet users across the globe.

ICANN, though, has an opportunity here to not disappoint its community by taking courageous steps toward doing the right thing about DNS abuse. First, it needs to fully enforce its contracts with those registries and registrars that routinely harbor bad actors and have excessive rates of abuse. It should also demand that any new WHOIS policy helps, not hinders, cybersecurity professionals mitigating DNS abuse in a timely manner.

DNS abuse still grows without check in the face of COVID-19

DNS abuse growth continues unabated and the community sectors concerned with abuse have urgently expressed their worries for some time now. The Business Constituency (BC) sounded this alarm last fall and others — including the GAC — are on record with impatient statements to ICANN that abuse really can't be ignored.

COVID-19 scams have magnified the problem. Criminal opportunists, to no one's surprise, are exploiting public fear and leveraging the DNS to lure victims. WIPO documents a surge in cybersquatting case filings and, according to the National Association of Boards of Pharmacy, "rogue pharmacy" scams — which now are pushing unproven COVID-19 treatments — are rampant at domain names sponsored by notoriously lax registrars. Google reported a dramatic surge in COVID-19 related abuse, citing 18 million daily malware and phishing emails related to COVID-19 during one week in April.

Even more recently, registry provider Neustar reports "an increase in the overall number of attacks as well as in attack severity . . ." In addition to noting that it has "mitigated more than double the number of attacks in Q1 2020 than in Q1 2019," Neustar also reported "an increase in DNS hijacking, a technique in which DNS settings redirect the user to a website that might look the same on the surface but often contains malware disguised as something useful."

Law enforcement has taken notice, of course. According to the FBI, reports received at its Internet Crimes Complaint Center more than doubled in April — reports of crimes that resulted in hundreds of millions of dollars of damage.

COVID-19 Response: Law Enforcement Perspective (Source: FBI)

While a few responsible registrars and registries have recently addressed abusive COVID-19 domain names in coordination with law enforcement, this response was not universal. Voluntary frameworks do not replace ICANN's responsibility to ensure that all registrars and registries participate in DNS abuse mitigation efforts, as requested by a growing consensus of stakeholders.

Warnings from ICANN's Stakeholders Ignored

The BC wasn't the first to raise the red flag on DNS abuse. Look back in time — in this instance, almost five years — and one can see abuse has been the subject of countless forms of advice from experts from the security sector, governments, community members and others exercising their mandate under the Bylaws to advise the ICANN Board.

  • January 2016 – SSAC (SAC77): ICANN should collect and disseminate information about known categories of how domain registrations are used for abusive and fraudulent purposes.
  • November 2016 – GAC (Hyderabad Communique): GAC questions the Board on ICANN's plans for abuse mitigation.
  • June 2018 – SSAC (SAC101): Security practitioners' and law enforcement's ability to mitigate cybercrime and DNS abuse has been negatively affected.
  • September 2018 – CCTRT Final Report: ICANN Org should work with registries and registrars to add provisions to contracts aimed at preventing DNS abuse.
  • October 2018 – GAC (Barcelona Communique): Not having reasonable access to WHOIS data is prolonging the exposure of victims to crime and abuse.
  • October 2018 – SSAC (SAC 103): SSAC recommends requirements for new gTLDs include robust abuse mitigation measures.
  • December 2018 – SSAC (SAC 104): The current lack of definition of reasonable access impacts the ability of security actors to fight abuse and cybercrime.
  • September 2019 – GAC (Statement on DNS Abuse): Protecting the public from security threats and DNS Abuse is an important public policy issue.
  • November 2019 – GAC (Montreal Communique): The Board shouldn't proceed with a new round of gTLDs until after implementation of recommendations on DNS abuse mitigation.
  • December 2019 – ALAC: DNS Abuse is a key factor eroding confidence in a single, trusted, interoperable Internet.
  • March 2020 – SSAC (SAC 110): It's clear the domain name system is under continual pressure from various forms of abusive and fraudulent behaviours, and the position is not improving.
  • March 2020 – GAC (ICANN67 Communique): GAC reiterated previous advice calling for implementation of community recommendations in light of previous advice on abuse mitigation.
  • June 2020 – GAC (ICANN68 Communique): Governments, ICANN, and the Community must take a multi-pronged approach to combating DNS abuse.

Yet, the ICANN Board has largely ignored calls for action.

ICANN Org has facilitated a lot of talking — it scheduled a cross-community discussion on abuse during its Montreal meeting last November and another one during its virtual meeting in June. Between those meetings, though, the ICANN Board responded with a wary letter to the BC defending its ticketing record and only this May, through a memorandum of understanding (MOU) with FIRST, seemingly acknowledged the rampant abuse problem and the need to do more than simply rely on best practices offered up by its contracted parties.

However, we're left with no tangible result from these discussions, except the insistence by ICANN Org leadership that anything related to fighting abuse must come from the community — a community where parties with outsized influence block meaningful anti-abuse measures.

The Ball is in ICANN's Court

If nothing changes, the pattern will continue, DNS abuse will persist as it has, and policy groups will continue to punt on new DNS abuse requirements, despite objections. ICANN Org must break out of its rut and secure real tools for mitigating abuse, which includes a robust WHOIS system to identify and proactively respond to DNS abuse. The current proposals by an expedited policy group (known as the EPDP) that refuse to treat phishing-related WHOIS requests with urgency are woefully inadequate (for example, responses to queries can be expected within ten business days). Phishing attacks are mitigated in hours, not days, to protect people from identity theft and financial ruin. This is just one of many problems with the new EPDP WHOIS policy to be shortly teed up for approval.

The ball is now squarely in the Board's court to demand that ICANN Org show leadership and do what it is supposed to do as an accrediting body meant to oversee the DNS. While confidence in ICANN's capabilities continues to erode, there's still an opportunity to remedy things for the better — it requires leadership, a firm direction, and community collaboration, but it's not too late to act.

Written by Mason Cole, Internet Governance Advisor at Perkins Coie


circleid.com | 23-Jul-2020 20:08

Senate Report on 5G: Recipe for Disaster

The Democratic Staff Report Prepared for the use of the Committee on Foreign Relations United States Senate, July 21, 2020, entitled "The New Big Brother," is actually all about 5G technology. The report jumps on the runaway anti-China train chaotically flailing around Washington these days to "out-Trump, Trump." It characterizes 5G technology, longstanding international collaboration, and COVID-19 tracking as all part of a global conspiracy for "digital authoritarianism" run out of Beijing. The proposed recommendations call for removing the U.S. from the real world to a 5G Fantasia by creating an "American 5G telecommunications alternative" that would consist of an "Industry Consortium on 5G" combined with a "Federally Funded Research and Development Center (FFRDC) on 5G" — which are not only preposterous but also a recipe for utter disaster. It is the 5G equivalent of "drink disinfectant to cure COVID-19." Ironically, it also bears a resemblance to the loony Trump NSC proposal in 2017 to create a U.S. Ministry of 5G Telecommunications.

Everything occurring in the 5G space in Washington today is utterly bereft of understanding of the 5G subject matter and driven by political abstractions and the politics of jingoism and xenophobia. The different branches of government, agencies, and K-street non-profits — spanning both political parties — are trying to outdo each other with ever more outlandish assertions and proposals. As part of the game, each is also trying to ensnare potential offshore partners in a 5G dance of the loon. Fortunately, countries that have a better understanding of the subject matter and less vulnerable politicians have rejected Trump's overtures. Banning is a "lewser" strategy that only begets less security and failure in the global marketplace.

A Short Overview of the Senate Report

If you jump to Annex 2 of the report — which is devoted to "the United States and 5G" — you can get a sense of what the Report authors know about the subject matter — which is basically nothing. It largely consists of radio spectrum lobbying material floating around Washington for the past several years. The word "virtualization" appears nowhere, and indeed the basics of 5G architectures, services, and protocols are not even mentioned. The entire conceptualization of 5G revolves around local political mantras citing concerns raised by "former military leaders." Most of the discussion and citations revolve around the radio spectrum, which is the least significant 5G development component. The only fact that they get right is that the U.S. has no radio access network transceiver vendors — which is almost irrelevant and of minimal strategic interest to the U.S. They obviously did not read the 5G Primer or examine any authoritative source materials.

Where the report really goes off the rails is the section on International Standards-Setting Bodies. It only treats two of the many bodies involved — 3GPP and the "International Telecommunications [sic] Union." The report blathers endlessly about the fact that the current ITU Secretary-General, Houlin Zhao, happens to be from China — ignoring the fact that ITU is a federation of bodies and the Secretary-General just runs the General Secretariat. As a kind of new low in despicable disparagement, the report describes Sec-Gen Zhao solely as "a former delegate at the Designing Institute of the Ministry of Posts and Telecommunications of China." It fails to say that he held this position as a young engineer almost forty years ago before he came to the ITU and served admirably for the past four decades as a CCITT study group support engineer, Director of the Telecommunication Standardization Bureau and Deputy Secretary-General. He has devoted his career to furthering global communications for every nation.

The pièce de résistance of this Report is the Conclusions and Recommendations section. It begins with a Trumpian conspiracy theory — 5G is all about a "digital authoritarianism" model perpetuated by China. It even contains an echo of McCarthyism, claiming that "the United States is now on the precipice of losing the future of the cyber domain to China."

The proposed recommendations would be a disaster for the U.S. It begins with the bogus assertion that "United States lags behind China in developing and deploying cutting-edge 5G technologies." It then leverages that assertion to saw off the U.S. from the rest of the world — calling for a U.S. 5G developed through a 5G Federally Funded Research and Development Center (FFRDC) "to surpass China," coupled with Industry Consortium on 5G "comprised of leading U.S. telecommunications and technology companies that would be mandated to create the American 5G telecommunications alternative."

It polishes off these absurd suggestions with several additional worthless proposals. How about one for money for "RAN technologies" and for a "5G Policy Coordinator within the White House." The former ignores that the most revolutionary component of 5G is Network Functions Virtualisation (NFV), and that Trump has had a 5G Policy Coordinator for the past three years. The principal need here is a modicum of Washington 5G cluefulness, not another warm body occupying the EOP.

Then the report asserts that "China is a leading developer and exporter of surveillance technologies" (along with many other countries, including the U.S.), and calls for pouring money into a "Digital Rights Promotion Fund," plus an "International Digital Infrastructure Corporation" to help sell "U.S.-made digital infrastructure," and an "Open Technology Fund." Rather amusingly, the last idea was actually implemented several years ago and helped further Assange's Wikileaks.

Potential hope with Biden

The impending Biden Administration will inherit a post-pandemic, dysfunctional national government in six months and will face challenges of monumental proportions on all fronts. On the 5G and related China political front, the Biden planners will also face being sucked into the bottomless Trump xenophobia whirlpool as one of the few cards left to play. The Senate 5G Report, unfortunately, is a depressing example of how "know-nothingism" can span political parties, in Washington's own version of Dumb and Dumber.

A post-Trump strategic plan for the U.S. should consist of three objectives. First is to replace the xenophobic banning policy with one that establishes an effective, fungible, non-dominant balance of all vendor products and services in the nation's infrastructure similar to that pursued by most of Europe. Second is to create an effective, global standards-based security regime via existing global industry bodies for hardware and software that consists of a combination of rigorous initial type approval and testing, combined with continuous monitoring for threats and rapid remediation of discovered vulnerabilities. Third is to rebuild and facilitate the effective engagement of both U.S. government agencies and industry personnel in all the international collaborative bodies as peers — especially those focused on national security and infrastructure protection capabilities. The U.K. NCSC provides a good Western model for what is possible. China — whose effectiveness has been largely due to its emulating former U.S. global strategies — is actually a good example of how to effectively participate in and contribute to 5G work.

These three steps also need to be accompanied by a knowledgeable understanding of the profound evolution underway in network technologies, architectures, and services. 5G radio technologies are a minor part of this evolution. The most significant changes revolve around virtualization of network components, architectures, and services, and the ability to orchestrate them on demand. Legacy internets disappear, and new low-latency network protocols and network capabilities emerge.

Hopefully the Washington 5G loons will find a home on another planet.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC


circleid.com | 23-Jul-2020 01:16

OneWeb Rises From the Ashes – Maybe

OneWeb launching satellites aboard a Soyuz launch vehicle from Baikonur Cosmodrome, Kazakhstan. Lift-off occurred on March 21st at 17:06 UTC. (Photo: OneWeb)

A consortium of the UK Government and Bharti Enterprises bought bankrupt OneWeb, a company that had raised $3.2 billion and had acquired valuable spectrum rights, for $1 billion. That is a good start, but a BBC article says experts believe that at least $3 billion is needed to complete the OneWeb constellation.

Will they make it?

The UK government will be a source of further funding. OneWeb's primary goal is closing the digital divide by bringing broadband connectivity to rural areas around the world, including, of course, the UK. That is obvious, but the UK government has other hopes for OneWeb. One frequently mentioned application is global positioning, navigation, and timing (PNT).

With Brexit, the UK lost access to the secure, encrypted Public Regulated Service (PRS) of the European global navigation system, Galileo, and the possibility of equipping OneWeb satellites for secure, encrypted PNT has been suggested as an immediate application. Tyler Reid and his colleagues showed that OneWeb satellites could provide excellent PNT performance if they reset relatively cheap atomic clocks once per orbit using the precise clocks of a civilian global navigation system; while PRS is reserved for European Union governments and defense users, the UK retains access to Galileo's public civilian service. (Reid is co-founder of Xona Space Systems, which plans to offer precision PNT service using a constellation of small satellites.)

The UK expects OneWeb to be profitable. Science, research and innovation minister Amanda Solloway said "This investment is likely to make an economic return, with due diligence showing a strong commercial basis for investment" and she added that "The deal contributes to the government's plan to join the first rank of space nations, and signals the government's ambition for the UK to be a pioneer in the research, development, manufacturing, and exploitation of novel satellite technologies enabling enhanced broadband through the ownership of a fleet of low-Earth orbit satellites." Perhaps the OneWeb investment will encourage efforts like this potential ground-station service.

What about Bharti? Bharti Airtel is India's second-biggest telecommunications firm, holding about a third of that market with 320 million customers, and they are also Africa's second-biggest mobile operator, with more than 100 million subscribers across 14 countries. They also offer Internet service in Sri Lanka, Bangladesh, and the Channel Islands. They obviously bring marketing and operating experience, plus a distribution channel with terrestrial Internet partners and government regulatory bodies in underserved nations, to the new OneWeb consortium.

They also bring deep pockets. Bharti Enterprises is a global conglomerate with interests in telecom, insurance, real estate, education, malls, hospitality, agriculture, food and other ventures. Their ISP business in India faces fierce competitors, and they obviously believe in diversification. (They were previously an investor in OneWeb).

When they filed for bankruptcy, OneWeb attributed their failure to the COVID-19 pandemic, but the handwriting was on the wall before that. In Senate testimony on October 25, 2017, OneWeb's Greg Wyler said they would launch their first ten satellites in May 2018, offer service throughout Alaska by 2019, and cover the entire US in 2020. While they had 74 satellites in orbit by the time of their bankruptcy and had signed an ISP distributor for Alaska and Hawaii, they were not offering service in Alaska or anywhere else, let alone covering the entire US, and were having problems with Russian launch and distribution partners. Furthermore, SpaceX was launching more satellites each month than OneWeb had in orbit, and their launch cost was significantly lower. OneWeb was in serious trouble and having trouble raising capital with or without COVID-19.

Now OneWeb has the backing of a government and a strong developing-nations partner and I assume their deals in Alaska and Hawaii and other previous arrangements with maritime companies, airlines, and other nations remain in place. On the other hand, they need to launch satellites quickly and they face stiff competition. SpaceX has a clear launch advantage, Amazon and China have deep pockets, and Telesat has a geostationary-satellite base as well as assets in the north.

I don't know if they will make it, but I hope they do. Billions of people remain to be connected to the Internet, so there is room for all of these companies and competition is healthy.

Written by Larry Press, Professor of Information Systems at California State University


circleid.com | 22-Jul-2020 22:10

Use of IP Geolocation in Threat Intelligence and Cybersecurity

There is no denying that we need all the help we can get as cyberattacks evolve. IP geolocation data is among the most useful threat intelligence sources that can strengthen an organization's cybersecurity posture. Primarily, tools such as IP Geolocation Database or its API counterpart can help us map the location of a device or user. More than that, however, they can help prevent some of the most prevalent forms of cybercrime.

In this post, let us dive into three uses of IP geolocation in the field of cybersecurity.

Prevent Phishing Emails from Reaching Staff Inboxes

Phishing is still a rampant cyberattack type, and everyone is susceptible to it. It can also come in several forms — aside from the original phishing email attack, we also need to be wary of vishing and smishing.

Consider a newly hired employee who received what seemed to be a welcome email from Salesforce Onboarding. Clicking a link that says, "Learn about your benefits," redirected him to an unknown website. What happened? The new employee just unknowingly installed a keylogger when he clicked the link. The victim does not have to be a new employee, in any case. Even tenured staff can be lured into "learning about their benefits."

However, companies can minimize the phishing risks if they integrate IP Geolocation API or use its database version alongside their email security solutions. When the employee in our hypothetical scenario receives an email from Salesforce, for instance, the company can set its email security tool to automatically run the IP address on IP Geolocation API to check if it truly belongs to Salesforce.

For example, if the email is from the IP address 150[.]129[.]8[.]34, IP Geolocation API would alert the recipient and the security team that it is not associated with Salesforce. Further investigation would also reveal that the IP address has been reported 174 times for abusive activity.
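
To make the workflow concrete, here is a minimal sketch in Python of the kind of check an email security tool could run against a geolocation lookup. The lookup_ip() function and its field names are illustrative placeholders rather than any particular vendor's API, and the placeholder values are not real lookup results.

    # Sketch: flag an inbound email whose sending IP does not geolocate to the
    # organization it claims to come from. lookup_ip() stands in for whatever
    # IP geolocation API or database the mail gateway integrates; the field
    # names and placeholder values are illustrative only.

    def lookup_ip(ip: str) -> dict:
        # Stub: in production, query your IP geolocation service here.
        placeholder_results = {
            "150.129.8.34": {"org": "Example Hosting Provider", "country": "ZZ"},
        }
        return placeholder_results.get(ip, {"org": "unknown", "country": "unknown"})

    def sender_matches_claimed_org(sending_ip: str, claimed_org: str) -> bool:
        record = lookup_ip(sending_ip)
        return claimed_org.lower() in record["org"].lower()

    if __name__ == "__main__":
        sending_ip = "150.129.8.34"  # taken from the message headers
        if not sender_matches_claimed_org(sending_ip, "Salesforce"):
            print("ALERT: sending IP does not geolocate to Salesforce; quarantine for review.")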

Compare that with the result when a genuine Salesforce IP address is run through the tool, and the differences are glaring.

Furthermore, the organization can check against the IP Geolocation Database to see which IP addresses are used by Salesforce and add these to their whitelist.

Minimize Card-Not-Present Fraud

Phishing attacks can also lead to stolen credit card and bank information, which would end up for sale on the Dark Web. The availability of these financial details makes it easier for threat actors to commit card-not-present (CNP) fraud. However, if merchants and card companies employ IP geolocation tools, they can prevent such fraudulent transactions from taking place.

For example, the IP geolocation tool would tell a credit card company that a transaction coming from IP address 45[.]143[.]221[.]54 originated in Nuremberg, Germany. The credit card owner, however, has never made any purchase outside the U.S. The credit card company's anti-fraud solutions would then alert the merchant of the suspicious transaction, and it would be declined. On the other hand, the credit card owner would also receive an alert to confirm if he or she made the transaction.
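
A rough sketch of that decision logic might look like the following. The geolocate_country() function is a stub for an IP geolocation lookup, and the country code for the example IP address is taken from the scenario above rather than from a live query.

    # Sketch: flag a card-not-present order when its IP geolocates to a country
    # the cardholder has never transacted from. geolocate_country() is a stub
    # for an IP geolocation API or database call.

    def geolocate_country(ip: str) -> str:
        placeholder_results = {"45.143.221.54": "DE"}  # Nuremberg, Germany, per the example above
        return placeholder_results.get(ip, "UNKNOWN")

    def assess_transaction(order_ip: str, cardholder_countries: set) -> str:
        country = geolocate_country(order_ip)
        if country == "UNKNOWN":
            return "review"
        return "allow" if country in cardholder_countries else "decline-and-alert"

    print(assess_transaction("45.143.221.54", {"US"}))  # prints: decline-and-alert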

As such, merchants and credit card companies that use IP geolocation data in their fraud protection solutions can prevent CNP fraud.

Implement IP-Level Blacklisting

Organizations can stop suspicious IP addresses from repeatedly attacking by adding them to their blacklists. Companies would be better off blocking the IP address 45[.]143[.]221[.]54, for instance, as it has been reported 1,266 times for a wide range of malicious activities.

There are instances, though, where IP-level blacklisting can lead to blocking innocent and even useful domains. Several domains use shared IP addresses and may just happen to share one with a malicious domain. So, before blocking an IP address, it is best to check against the IP Geolocation Database to ensure that you are not blocking valuable domains.
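
The sketch below illustrates that precaution. The domains_hosted_on() function is a stub for a reverse-IP query against whatever IP intelligence source is available, and the domain names are made-up examples.

    # Sketch: before blocking at the IP level, check which domains share the IP.
    # If unrelated (presumably innocent) domains are hosted on it, block only the
    # known-bad domains instead of the whole address.

    def domains_hosted_on(ip: str) -> list:
        # Stub: replace with a reverse-IP lookup against your IP intelligence source.
        return ["malicious.example", "innocent-blog.example", "shop.example"]

    def blocking_decision(ip: str, known_bad_domains: set) -> dict:
        hosted = domains_hosted_on(ip)
        bad = [d for d in hosted if d in known_bad_domains]
        innocent = [d for d in hosted if d not in known_bad_domains]
        if innocent:
            return {"action": "block-domains", "targets": bad}
        return {"action": "block-ip", "targets": [ip]}

    print(blocking_decision("45.143.221.54", {"malicious.example"}))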

Conclusion

Threat actors do not care where they commit crimes as long as they can gain something from it. Through phishing campaigns, they can gain valuable data that they can sell on the Dark Web or use to commit fraud. Nevertheless, these crimes are preventable with the appropriate security measures and tools.

IP Geolocation Database and API are two programs that can provide IP intelligence, which can help enrich security systems, whether they are email security solutions, fraud protection programs, or other cybersecurity tools.


circleid.com | 22-Jul-2020 19:50

i2Coalition and The Domain Name Association Announce Their Intent to Merge

The Internet Infrastructure Coalition (i2Coalition) and The Domain Name Association (DNA) have announced their intent to merge, forming the largest Internet infrastructure advocacy group in North America. The combined association will operate under the name i2Coalition and maintain the i2Coalition's existing organizational and management structure. "The merger of our organizations underpins the mission of both the DNA and the i2Coalition, combining our mutual dedication to Internet industry best practices and policies to empower continued growth. Combined, we represent over 100 organization members and their online business interests," says Christian Dawson, Co-Founder of the i2Coalition.


circleid.com | 21-Jul-2020 22:39

Can 5G Compete with Cable Broadband?

One of the recurring themes used to promote 5G is that wireless broadband is going to become a serious competitor to wireline broadband. There are two primary types of broadband competition — competition by price or performance. Cable companies have largely won the broadband battle in cities and suburbs, and I've been thinking about the competition that cable companies might see from 5G.

Cable broadband is an interesting product. In most cities and suburbs today, the basic broadband product has a download speed between 100 Mbps and 200 Mbps, with upload speeds in the range of 10 Mbps to 15 Mbps. The cable companies decided over a decade ago that they were going to stay in front of market demand and have periodically increased speeds, with the most recent speed increases introduced around two years ago. Cable systems can offer speeds up to a gigabit, but the ugly secret that cable companies don't want to talk about is that it would be incredibly expensive if too many people bought and used gigabit speeds. CCG does market surveys, and the primary complaint that customers have about urban cable broadband is inconsistency — networks have periodic slowdowns and outages that customers find frustrating. As much as one-third of cable customers also poll as hating the larger cable companies' customer service.

The biggest weakness of cable broadband is the upload speed. This wasn't an issue for most homes until the recent pandemic sent students and parents home. Many homes that were satisfied with cable broadband have found that the upload streams are inadequate to allow multiple people in a home to connect to servers and video conferencing services. Cable companies can probably tweak upload speeds upward by 50% more, but that will still feel slow to many homes. Cable companies are faced with an expensive upgrade to DOCSIS 4.0 to create symmetrical speeds.

There are two products being marketed as 5G. The first is Verizon's fixed wireless access product. This is not 5G and is best described as fiber-to-the-curb because it requires a fiber network built close to homes to provide this product. This is a fiber technology that happens to use a wireless drop. As such, it is technologically superior to cable broadband in that speeds can be symmetrical. Verizon says speeds can be as fast as a gigabit, but speeds will vary by customer and will likely slow down during heavy rain or get slower in summer when shrubs and trees are in full leaf. From a price perspective, Verizon is using this product to reduce cellular churn and is pricing it at $50 for a Verizon wireless customer and $70 for everybody else. The $70 price will not push Comcast and Charter to lower prices, but it might force them to hesitate with future rate increases for neighborhoods competing with the Verizon product.

For years, the FCC and the industry have implied that 5G cellular will be a competitor for landline broadband. I still can't see many homes accepting 5G cellular as a replacement for landline broadband. I can think of a number of important ways to compare and contrast the two broadband products:

Speed. Forget the millimeter-wave product that cellular companies are touting as delivering cellular speeds over a gigabit. It's a gimmick product used to try to promote the idea that 5G is fast. The millimeter-wave technology is only good outdoors, and even then only travels a few hundred feet from a cell site. It delivers gigabit speeds to cellphones — when cellphones aren't designed to run multiple apps that require fast broadband. The 5G download speeds on regular cellphones should creep up to 100 Mbps over the next 5 to 7 years, which would rival the base speeds on cable company networks — but by that time the cable companies are likely to upgrade all of their customers to 250 Mbps. Cellular upload speeds don't matter, because no family is going to conduct multiple upload sessions over a single cellphone.

Overall Capacity. Cellular networks today carry less than 5% of all US broadband traffic. Even the majority of data passed through cellphones is handed off to landline networks through WiFi. For North America, Cisco predicts that in 2020 there will be 77 exabytes per month carried by landline networks compared to 3.4 exabytes carried by cellular networks. By 2022 that will grow to 109 exabytes for landline networks and 6 exabytes for cellular networks — the gap between the two technologies is rapidly widening. There is no scenario where cellular networks can somehow steal away a lot of the traffic carried by landlines. When cellular companies make this claim, they are arguing against the realities of physics.

Household Usage. Household usage of broadband has exploded. In the first quarter of 2018, the average US home used 215 gigabytes of data per month. At the end of the recent first quarter of 2020, that had grown to over 400 gigabytes per month. By 2024 the average home might be using more than 700 gigabytes per month.

Data Caps. The above statistics show the absurdity of the claim that cellular will somehow overtake landline broadband. Even the 'unlimited' cellular data plans today are capped or heavily throttled after 20 or so gigabytes of data used in a month. Cellular companies are not likely to raise the data caps much because they don't want heavy data users sucking all of the cellular networks' capacity.

Pricing. US cellular data is the most expensive broadband in developed countries. For 5G to compete with landline broadband, the cellular companies would have to kill the paradigm of selling an extra gigabyte of data for $10. 5G can only compete with landline broadband if the cellular carriers can increase wireless network capacity by a factor of ten and are willing to lower prices by more than a factor of ten. The first is not possible due to the limitations of physics and there are no indications that cellular carriers are willing to consider the second.
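
As a rough back-of-the-envelope illustration of why the current pricing paradigm rules out substitution, combine the figures already cited in this piece: roughly 400 gigabytes of monthly household usage, a practical cellular cap of around 20 gigabytes, and $10 per additional gigabyte.

    # Back-of-the-envelope arithmetic using the figures cited above.
    household_gb_per_month = 400   # average household usage, Q1 2020
    cellular_cap_gb = 20           # approximate point where "unlimited" plans throttle
    overage_price_per_gb = 10      # dollars, the $10-per-gigabyte paradigm

    overage_gb = household_gb_per_month - cellular_cap_gb
    implied_monthly_cost = overage_gb * overage_price_per_gb
    print(f"Implied overage charges: ${implied_monthly_cost:,} per month")  # $3,800 per month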

Written by Doug Dawson, President at CCG Consulting


circleid.com | 21-Jul-2020 22:24

DNS Records Lookup of "Walmart Drive-In Movie Theater" Domains Indicates Likely Typosquatting

People may not yet be keen on going to movie theaters due to COVID-19. As such, drive-in movie theaters have become more prominent as these help implement social distancing measures. In line with this, Walmart has announced that it is transforming 160 store parking lots into drive-in movie theaters.

Although the project won't roll out until August, interest has already spiked, with search terms such as "Walmart drive in locations" and "Walmart drive in movie locations" breaking out on Google. Specific searches for Walmart's drive-in website (walmartdrive-in[.]com and walmart drive-in[.]com) also increased by 5,000%.

Domainers and possible threat actors seemingly started to take advantage of the trend by registering lookalike domain names. We took a closer look, notably through a DNS records lookup.

DNS Records Lookup of Walmart Drive-In Domain Lookalikes

We detected eight domain names inspired by Walmart's drive-in theater announcement. They appeared on the Domain Name System (DNS) on 3 July, only two days after Walmart announced its plans. These are:

  • walmardrive-in[.]com
  • walmatdrive-in[.]com
  • wamartdrive-in[.]com
  • walmartdrive-im[.]com
  • wallmartdrive-in[.]com
  • walmartdrie-in[.]com
  • walmartdriv-in[.]com
  • walmartdrive-on[.]com

The eight potential typosquatting domains were bulk-registered and had the same WHOIS record. Note, though, that two of them had a different registrar. The registrant names and organizations of all domain names have been redacted for privacy, but their address "Chengdu, Sichuan, China" remained.

To get a better view of the domain names' infrastructure, we used a DNS records lookup to see their IP address, nameserver, and mail server. We found that all of them use the same IP address and mail server. They also use either ns2[.]above[.]com or ns1[.]above[.]com as a nameserver.
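
For readers who want to reproduce this kind of check, the sketch below uses the open-source dnspython library (installable with "pip install dnspython") to pull the A, NS, and MX records for a couple of the lookalike domains so that shared infrastructure stands out. The defanging brackets are removed so the queries can actually run, and the output will reflect whatever the DNS returns at the time of the query.

    # Sketch: snapshot the A, NS, and MX records of suspect domains with dnspython.
    import dns.exception
    import dns.resolver

    def dns_snapshot(domain: str) -> dict:
        snapshot = {}
        for rtype in ("A", "NS", "MX"):
            try:
                answers = dns.resolver.resolve(domain, rtype)
                snapshot[rtype] = sorted(rdata.to_text() for rdata in answers)
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
                snapshot[rtype] = []
        return snapshot

    for suspect in ("walmardrive-in.com", "wallmartdrive-in.com"):
        print(suspect, dns_snapshot(suspect))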

The typosquatting domains' infrastructure indicates that they use shared services. Running the nameservers on Reverse NS yielded hundreds of associated domain names.

The same held when we ran the mail server address park-mx[.]above[.]com on Reverse MX.

Why Are These Domain Lookalikes Suspicious?

The potential typosquatting domains could have been registered for investment or malicious purposes (unless Walmart registered them). But based on their WHOIS records and DNS infrastructure, we find these typosquatting domain names suspicious for three reasons:

They are associated with a malicious IP address.

Hundreds of domain names share the domain names' IP address 103[.]224[.]182[.]242. And it's important to note that this IP address was tagged "malicious" on VirusTotal. While this could mean that some of the associated domain names figured in nefarious activities in the past, the fact that the typosquatting domains we are investigating are connected to a suspicious IP address should already be a red flag.

Their WHOIS data differs from that of the official Walmart drive-in domain.

While the registrant name and organization of walmart[.]com were not disclosed, its WHOIS records indicate an address in Bentonville, Arkansas, where Walmart's headquarters is located. Furthermore, the official domain name is associated with private nameservers based on this DNS records lookup's results.

Walmart is more likely to host a web page on its official website for its new drive-in movie theaters.

Lastly, Walmart provides several services — money transfer and bill payment, healthcare, gift registry, auto care, and auto buying services. And all of these are hosted within walmart[.]com. As such, the company is not likely to create a whole new website for its drive-in movie theaters.

Threat actors always search for newsworthy events that they can capitalize on. The growing interest surrounding Walmart's drive-in movie theaters tells them that investing in domain name lookalikes could be lucrative.

A company as big as Walmart has a lot of things on its plate. Aside from staying afloat amid the ensuing pandemic, it should also keep pace with threat actors. Educating their consumers about the dangers of visiting typosquatting domains is one solution. Another could be enlisting the help of typosquatting protection and DNS records lookup tools so its infosec team can take immediate action.


circleid.com | 21-Jul-2020 03:46

Confusingly Similar But No Likelihood of Confusion in UDRP

The word "confusion" in the Uniform Domain Name Dispute Resolution Policy (UDRP) signifies two separate states of mind. The first in ¶4(a)(i) appears in the phrase "identical or confusingly similar to a trademark or service mark in which the complainant has rights." It is a test to determine whether the mark owner has standing to maintain a UDRP proceeding. See The Perfect Potion v. Domain Administrator, D2004-0743 (WIPO November 6, 2004) ("The intent behind [the first requirement] is to ensure that the Complainant has a bona fide basis for the Complaint.")

The second use of confusion appears in ¶4(b)(iv) in the phrase "likelihood of confusion." This is a test to determine whether the registration and use of the challenged domain name amount to cybersquatting. A word of caution here, though: the second phrase is not to be confused with the same phrase used in trademark infringement jurisprudence. Rather, "likelihood of confusion" in the UDRP context is to be understood as an answer to the following question: "is it likely that a consumer will be confused into believing that there is an association of the domain name with the mark owner?" The standard for making that determination in a UDRP proceeding is significantly less demanding and made applying different factors than in a trademark infringement case. See Smoky Mountain Knife Works v. Carpenter, AF-230ab (eResolution July 3, 2000) (holding "Respondent's use of the Contested Domain Names appears to satisfy even the more stringent test of likelihood of confusion.")

This understanding — and sometimes misapplication of the standard and factors — is illustrated in Truworths Ltd v. saichao dong, D2020-1189 (WIPO June 25, 2020). The Panel concluded that while there was confusing similarity, there was no likelihood of confusion between the disputed domain name and TRUWORTHS, and dismissed the complaint. Setting aside the obvious question of typosquatting, which is an issue for the second and third requirements, the domain name is also identical to the mark, not in a side-to-side comparison, of course, but aurally, as homonyms: that is, when spoken, the two are identical in sound. In any event, Complainant succeeds on the first requirement without controversy.

For the balance of the decision, though, the Panel steps into the error (although not in so many words) of applying the heightened trademark infringement standard. In a recent private comment I received about this case, the commentator noted: "[I] [d]on't know if I have ever seen a UDRP decision that spends so much time dealing with what it does not know in order to deny a complaint." This hits the nail dead-on, but in commending the decision, the commentator also got it wrong: "I would wager to bet" said the commentator, "that most panelists would have readily found for the complainant based upon a list of presumptions that the panelist here calls into question." My reading of the decision is not so approving. Frankly, the Panel's methodology in paying more attention to the unknown than the known is astonishing.

Pondering on facts unknown and drawing inferences from what the unknown may reveal if they were known can never be a substitute for knowing. Where triers of fact fail to concentrate on what is known and draw inferences from what is not known, they fall into error. An indisputable fact comes into being when a contention is supported by evidence. "Presumptions," on the other hand, are no more than speculations of facts. This is why contentions that are mere "presumptions" are not a formula for success in litigation or proceedings under the UDRP.

It is a fundamental law of reasoning that to treat established facts as presumptions compounds error by elevating those inferences above facts. This is precisely why the reasoning process in Truworths is skewed. Complainant submitted evidence, but instead of giving it weight, the Panel preferred to ponder on what it did not know, and from what it did not know, inferred what it might be.

It says that it "seems" to him,

that the Respondent is using the Disputed Domain Name for the offering of gambling services which has nothing to do with the Complainant or the Complainant's trademark and the fact that the Disputed Domain Name is in substance a typographical variant of the Complainant's trademark is merely coincidental. The Complainant says the Respondent's activities are illegal in China and illegal activities cannot confer a legitimate interest. Whilst the Respondent's website is clearly written in Chinese logograms the Panel does not know whether it is based in or targeted specifically at China as opposed to readers of Chinese wherever they may be.

From these non-facts the Panel then infers that whatever confusion may exist "is merely coincidental." The error here is that Respondent did not appear to rebut the evidence, yet the Panel concludes that because the domain name resolves to a website having no connection to Complainant's goods, it must be exonerated from having intentionally chosen a domain name that just happens to mimic Complainant's.

Whether Chinese gamblers are in China or expatriates is irrelevant to the question of whether Respondent intentionally registered the domain name for its trademark value. What are the Panel's errors? First, it has misconceived the principle that informs "likelihood of confusion" by suggesting that domain names resolving to websites that "ha[ve] nothing to do with the Complainant" are non-infringing. This turns UDRP law upside down. It is true that a respondent who appears and rebuts Complainant's contentions can succeed, but not because its motivation is unknown; rather, because it has explained its motivation in certified testimonial and documentary evidence. There is a second error, namely giving greater weight to inferences than to the facts of record.

There is a menu of indisputable facts in Truworths; indisputable because supported by documentary evidence. One fact is that as soon as Respondent learned of a possible challenge to the domain name, it cyber-flew to another registrar in the U.S., registering with a nonexistent address. The original registrar "suspended this domain because of betting" (per its email response to Complainant). It turns out that betting websites are illegal in China. (Respondent has 2,200 domain names pointing to betting websites!) Based on this illegality, Complainant asked a sensible question (reframed in my words): "Can the use of a domain name for illegal activities ever confer a legitimate interest?"

We know from numerous decisions that in certain circumstances — fraudulent and criminal conduct, for instance — Complainant's prima facie case that Respondent lacks rights or legitimate interests must succeed, unless Respondent rebuts the contentions by coming forward (the burden shifts to Respondent) with evidence that it does have rights or legitimate interests. This is probably true also of illegal activities. There was, as I have already noted, no rebuttal in this case since Respondent defaulted in appearance, yet the Panel nevertheless decided to put aside the issue of whether Respondent has a right or legitimate interest in order to explore the issue of abusive registration, and found (lo and behold!) what it had already predetermined: not only does Complainant fail on its prima facie case, it also fails to prove abusive registration.

The Panel's explanation for accepting its inferences over undisputed facts is astonishingly misconceived. It says that

Taking the evidence as a whole the Panel is not satisfied that it establishes the Disputed Domain Name was chosen because of its similarity to the Complainant's trademark. It seems to the Panel more likely that it was chosen as result of whatever methodology the Respondent uses to select domain names, and its similarity to the Complainant's trademark is entirely coincidental. If that is the case there is no basis for a finding of bad faith unless further factors suggest otherwise.

The Panel's fallback on Respondent's "[unknown] methodology [in] select[ing] domain names" cannot be explained. Whatever the "methodology" may be (completely speculative, as even the Panel admits), there is still a likelihood of confusion, so that to conclude the choice was "more likely . . . entirely coincidental" makes no sense in light of the evidentiary facts that contradict it. Whatever the composition of the other 2,199 domain names Respondent holds, the evidence establishes that the disputed domain name was chosen because Respondent knew Complainant's mark was known to Chinese consumers. The choice is not "coincidental" but intended to attract consumers to Respondent's website because they recognize Complainant's mark.

This is one of those decisions that should be vacated, and if it is ever challenged in a court of law (perhaps even in an in rem proceeding under the ACPA), it will be.

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP


circleid.com | 21-Jul-2020 00:34

Hot Take on the Twitter Hack

If you read this blog, you've probably heard by now about the massive Twitter hack. Briefly, many high-profile accounts were taken over and used to tweet scam requests to send Bitcoins to a particular wallet, with the promise of double your money back. Because some of the parties hit are sophisticated and security-aware, it seems unlikely that the attack was a straightforward one directly on these accounts. Speculation is that a Twitter administrative account was compromised and that this was used to do the damage.

The notion is plausible. In fact, that's exactly what happened in 2009. The result was a consent decree with the Federal Trade Commission. If that's what has happened again, I'm sure that the FTC will investigate.

Again, though, at this point I do not know what happened. As I've written, it's important that the community learn exactly what happened. Twitter is a sophisticated company; was the attack good enough to evade their defenses? Or did they simply drop their guard?

Jack Dorsey, the CEO of Twitter, tweeted that they would share "everything we can."

Tough day for us at Twitter. We all feel terrible this happened.

We’re diagnosing and will share everything we can when we have a more complete understanding of exactly what happened.

💙 to our teammates working hard to make this right.

— jack (@jack) July 16, 2020

With all due respect, that doesn't sound good enough. Other than minor details that would be useful as evidence, the security community really needs to know what went wrong — we can't build proper defenses without that. (For that matter, I've even called for disclosure of near misses.)

Twitter has become a crucial piece of the communications infrastructure; it's even used for things like tornado alerts.

But even if it weren't used for critical activities, it's a major site — and the public deserves details on what went wrong.

Written by Steven Bellovin, Professor of Computer Science at Columbia University


circleid.com | 20-Jul-2020 22:27

IGF-USA Teaser. Laura DeNardis: The Internet in Everything

The Internet in Everything
Freedom and Security in a World with No Off Switch
Laura DeNardis (Yale University Press)

Dr. Laura DeNardis, Professor and Interim Dean of the School of Communication at American University and a Faculty Director of the Internet Governance Lab, is a featured panelist at this week's IGF-USA conference.1 In advance of the event, I would like to draw attention to her sixth book, The Internet in Everything: Freedom and Security in a World with No Off Switch.2 This treatise is one of those "should/must-reads" that come along from time to time, as it focuses on a critical issue that is overlooked by either design or neglect: how digital infrastructure determines policy. The book is a provocation both to "see" digital infrastructure as it is and to understand and reimagine the politics embedded within it.

The Internet is no longer a communication system that connects people and information. It has become the Internet of Things (IoT), where cyber-physical systems, with the help of Artificial Intelligence (AI), connect people and the multitude of devices in their homes, public spaces, and workplaces. In the process, the boundaries between the material and virtual worlds become blurred and often invisible, and in many cases, humans have become a "Thing" on the Internet of Things. Machines that communicate with each other already represent the majority of Internet "users" and employ a significant proportion of available digital capacity, making them important stakeholders in cyberspace governance. This transformation has even more significance than the transition from an industrial society to a digital information society.

Dr. DeNardis lays down the facts and figures and makes connections to provoke us to think harder and deeper about the role digital infrastructure plays in our lives. Most importantly, she challenges the reader to think about how we can govern a system as it appears to be governing us. She observes that "The most consequential global policy concerns of the present era are arising in debates over the architecture and governance of cyber-physical systems. Technology policy has to be conceptualized to account for the expansion of digital technologies from communication and information exchange to material sensing and control. How technical, legal, and institutional structures evolve will have sweeping implications for civil liberties and innovation for a generation."

After laying out the landscape of the cyberspace of physical systems, she tackles the global politics surrounding them, including privacy, security, and interoperability issues. She then moves on to rethinking Internet freedom and governance and challenges various conceptions of what Internet freedom means while offering thoughts on the future of Internet Governance that go far beyond current debates.

This book is a skillful provocation to see the Internet from the perspective of its infrastructure. Dr. DeNardis successfully brings together what is often seen as separate: technologies, values and human rights. Anybody who is seriously thinking about Internet Governance and the future of humanity should read this book. I am looking forward to hearing Dr. DeNardis speak at the IGF-USA and expect her insights to provoke many to think anew.

  1. For details of the session see: https://www.igfusa.us/what-does-the-covid-crisis-mean-for-internet-governance/
  2. 2020, Yale University Press, ISBN-978-0-300-23307-0 

Written by Klaus Stoll, Digital Citizen


circleid.com | 20-Jul-2020 18:58

June 2020 Dot Brand Insights Report: How .SHARP Surged to the Top of the Alexa Rankings

In our June 2020 Dot Brand Insights Report, we look into the increase in .BRAND domains being used by brand owners to communicate important messaging to their customers, investors, and employees in light of the COVID-19 global pandemic. And we share a story about how JP.SHARP surged to the top of the Alexa .BRAND chart for domain names when its subdomain was used for Japanese residents to obtain surgical masks.

We've seen an increase in domains under management, led by a 17% increase from .BRANDs within the automotive, tires, and other vehicles sector. Despite this, only 10% of .BRAND domain names have been set up with email security tools such as Domain-based Message Authentication, Reporting and Conformance (DMARC), DomainKeys Identified Mail (DKIM), and Sender Policy Framework (SPF) records. The lack of email security among .BRAND and legacy domain names feeds the problem of fear-based phishing attacks, which are especially effective in a time of quarantine.
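
For teams that want a quick first check of this kind, the minimal sketch below (assuming the dnspython package; "example.com" is a placeholder for the .BRAND domain being checked, and DKIM is omitted because verifying it requires knowing the sender's selector) looks for published SPF and DMARC records:

    import dns.resolver

    def txt_records(name):
        """Return the TXT records published at a name, or an empty list."""
        try:
            return [rdata.to_text().strip('"')
                    for rdata in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"  # placeholder; substitute the domain to audit
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
    print("SPF:  ", spf or "none published")
    print("DMARC:", dmarc or "none published")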

We also continue to report on more Alexa statistics, including the distinct uptick in the activity surrounding .BRAND domain names like .GOOGLE and .CANON. The number of .BRAND sites from the finance and money industries has also risen sharply, as more turn to .BRANDs for business and personal banking.

In our In Focus segment, learn how .BRANDs can be used to manage third-party distribution (assessing ongoing behavior, performance, and risk) and to achieve consistency in website framework and content structure.

Download the full report now.


circleid.com | 17-Jul-2020 22:18

Beware of Abandoned Domain Names in this Turbulent Time and as the Global Economy Changes

The outbreak of COVID-19 has caused worldwide disruption — for whole nations and their economies. Unfortunately, there will be some side effects for businesses.

  • A number of brands will disappear from the streets and shelves, as businesses that fail to weather the storm will have to fold.
  • Companies that do survive will likely focus more on their core markets, pulling brands out of higher risk, less profitable markets.
  • As vulnerable businesses look to stay afloat, and stable brands look for a bargain, there'll likely be an increase in mergers and acquisitions.

It's this retraction or convergence of brands that cyber criminals will take advantage of. An unfortunate truth is that, whenever disaster hits, cyber criminals are ready to capitalize on the emerging crisis to make fast money, and COVID-19 is no exception.

There is much evidence suggesting an increase in cyberattacks during the COVID-19 pandemic — and the method of particular concern for folding, contracting, or merging brands is that of abandoned domain names.

The reason for this is that abandoned corporate domain names carry a footprint of digital activity that can be leveraged as an attack vector. The domain name, together with its domain name system (DNS) configuration, forms the foundation of any business and brand, enabling websites, email, virtual private network (VPN) access, and possibly even voice over IP. Herein lies the risk.

According to a recent article published by CSO Online, researchers attempted to understand the impact of letting an old domain expire by re-registering merged or acquired companies' expired domains and setting up email servers. Soon after doing so, the researchers began receiving an influx of emails, including confidential information like bank correspondence, invoices, sensitive legal documents, and LinkedIn® updates.

This shows that, without actually hacking into a company's systems, a re-registered domain name not only gives the new registrant instant access to emails, but also the ability to reset passwords to accounts — including management or financial portals, databases, and social media. This can expose a business to phishing attacks, data leaks, social engineering, and more.

It's also possible to reinstate an old web shop to take new orders and payments without actually fulfilling them, and take over email marketing accounts to conduct phishing campaigns. Many users reuse old passwords, and just one compromised account can lead to further breaches on other accounts.

So what's the solution for brands in a state of change following COVID-19? How do you protect the assets of a brand axed due to budget cuts, or those of a company just acquired? Companies face a dilemma — do they retain and renew every single domain name just to be safe, or downsize their portfolio at a time when budgets are tight?

Retaining and renewing every domain may seem the safest course, but it doesn't help you fulfill the directive to reduce your budget. CSC's holistic, four-step digital optimization framework is designed to review a client's digital assets, including auditing (so you know what you own) and rationalizing the domain name portfolio for better management and return on investment.
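
Independent of any particular vendor tooling, a first pass at such an audit can be sketched in a few lines. The snippet below is a hedged illustration (it assumes the python-whois package, and the domain list is a placeholder, not a real portfolio) that simply flags names expiring within the next 90 days:

    import datetime
    import whois  # python-whois package

    portfolio = ["example.com", "example.net"]  # placeholder list
    horizon = datetime.datetime.utcnow() + datetime.timedelta(days=90)

    for name in portfolio:
        record = whois.whois(name)
        expires = record.expiration_date
        if isinstance(expires, list):  # some registries return several dates
            expires = min(expires)
        if expires and expires < horizon:
            print(f"{name} expires {expires:%Y-%m-%d}: renew it or retire it deliberately")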

Undertaking digital optimization alone is a challenge with which many companies struggle. In the case of a merger or acquisition, this challenge is compounded when different departments take over existing accounts, or employees leave the company, taking their knowledge (and passwords) with them. When a company isn't aware of the full extent of its digital footprint, it risks abandoning the domains that matter, and therefore increases the risk of cyberattacks.

CSC's methodology makes the whole process easier, and enables us to overcome one of the biggest challenges: identifying the most vital domains a company owns. CSC Security Center® — our proprietary tool based on advanced algorithms — helps identify the most vital domains, removing the guesswork from the process, and ensures that critical domains and those with a digital footprint are never abandoned.

Our digital optimization approach looks at a client's trademark rights, the markets in which they operate, and even goes as far as to consider the ability to recover domains from third parties.

  1. This article was originally published on Digital Brand Insider.

Written by Ken Linscott, Product Director, Domains and Security at CSC


circleid.com | 17-Jul-2020 21:50

What a WHOIS Registrant Lookup Can Tell about "Kanye West" Newly Registered Domains

Kanye West trended after he announced his plan to run for U.S. president on 4 July 2020. On Twitter, his announcement was liked over 1.1 million times and retweeted more than 500,000 times. Elon Musk was also quick to express his support.

On 5 July 2020, a day after the announcement, our typosquatting detection capabilities picked up nine Kanye West domain names:

  • kanyeowest2020[.]com
  • kanyewest2020[.]today
  • kanyewest2020[.]ventures
  • kanyewest2020[.]gallery
  • kanyewest2020[.]vision
  • kanye2020[.]store
  • kanye2020[.]run
  • kany2020[.]com
  • kanye2020[.]vote
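
One simple way to surface lookalikes of this kind, sketched below with the standard library only (this is an illustration, not the actual logic of any detection feed, and the 0.85 threshold is arbitrary), is to score candidate names against the target strings with a similarity ratio:

    from difflib import SequenceMatcher

    targets = ["kanyewest2020", "kanye2020"]
    candidates = ["kanyeowest2020", "kanyewest2020", "kany2020", "unrelatedname2020"]

    for name in candidates:
        best = max(SequenceMatcher(None, name, t).ratio() for t in targets)
        if best >= 0.85:  # threshold chosen for illustration only
            print(f"{name}: likely lookalike (similarity {best:.2f})")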

A WHOIS registrant lookup of these newly registered domains raises questions about domain ownership and the possible reasons for these registrations. Let's take a closer look.

WHOIS Registrant Lookup of Kanye West Domain Names

Kanye West has an official website, kanyewest[.]com, where people can find his clothing merchandise and some of his videos. According to a WHOIS registrant lookup, the domain name is owned by Universal Music Group under the registrant organization "Island Def Jam," which is based in New York. The email address mentioned on record — hostmaster@umusic[.]com — belongs to Universal Music as well.

The Kanye West domain names detected, on the other hand, do not match the details present in the official domain's WHOIS record. Here are the general findings on the lookalike domains:

  • Registrant name and organization: All domain name records except for that of kanye2020[.]store have either been redacted or left blank. The domain kanye2020[.]store was registered under the registrant organization, Callum Phillips.
  • Registrant address: While registrant addresses could reflect those of the domains' privacy protection companies, it is still important to note that four domains have U.S. addresses, two are based in Panama, and the others are U.K.-, Australia-, and Canada-based.

  • Registrar: The registrar of most of the domains was either GoDaddy or NameCheap, while one was Google Inc.

Digging Deeper Using Domain Intelligence

Aside from these Kanye West domains, we also saw some Yeezy-related domain names on the same day that the lookalike domains were detected:

  • freindsofyeezy[.]vote
  • freindsofyeezy[.]com
  • freindsofyeezy[.]support

Yeezy is Kanye West's clothing line. The official site kanyewest[.]com contains a link to the domain yeezysupply[.]com.

We also wanted to see what other domain names belong to the registrant organization Callum Phillips, so we ran a reverse WHOIS search. Aside from kanye2020[.]store, the organization also owns the domain yeezy2020[.]store. Both domain names appeared to be parked at the time of writing.

What Could the Goals of These Domain Registrations Be?

While many of these "Kanye West" domains may have been speculatively registered as part of an investment strategy, some could be weaponized and used in phishing and malware attacks or financial scams. That's unless Kanye West or someone in his team registered them for commercial purposes, of course.

Nevertheless, domainers and threat actors are known to quickly react to headlines. Since the beginning of June, for instance, there have been hundreds of election-related domain names detected in the Domain Name System (DNS). As the U.S. election nears, we are bound to see more.

Registrants of the Kanye West domain names could be taking advantage of the millions of searches for Kanye West and his political plans. The image below is from Google Trends, which shows that there were over 2 million searches for Kanye West on 4 July 2020.

It's also possible that the Kanye West domain names could be used to trick supporters into giving monetary donations or purchasing pirated merchandise, for example. Furthermore, these lookalike domains could figure in phishing and malware campaigns, which would cause far more damage.

Whether or not Kanye West will be running for U.S. president is irrelevant when it comes to cybersecurity. People should be wary of any proven typosquatting domain names either way. WHOIS registrant lookup queries can also reveal more about identities and inconsistencies between legitimate and potentially suspicious domain names.


circleid.com | 17-Jul-2020 20:54

2008 vs 2020: Analyzing Domain Names in a Global Recession

In the past few months, the definition of normal has changed for institutions, individuals, and industries. When the future seems blurry, at Radix, we go back to data and insights from the past to get some perspective.

In our exclusive Radix Speak series, we bring you in-depth ideas on how the domain industry and everything associated with it will evolve in the coming months, so that we can all be better prepared for it.

To start off this series, we have Bala GR, Head of Business Intelligence at Radix, study and analyze data from the domain industry since 2008 to gauge how 2020 might shape for us.

2008 vs 2020: Analyzing domains in a global recession from Radix on Vimeo

A lot has already been said about the strange turn of events that has kept the global economy on the hook this year. Sure, we expected a recession; it was long overdue. What we didn't expect was a pandemic and a worldwide self-quarantine for months on end. A few years from now, we will be looking back at 2020 with awe and absurdity in equal measure.

Speaking of looking back, this year opens up the perfect opportunity to examine the 2008 recession. After all, that's our only benchmark in these uncertain times. While a decade has since passed, there are trends that could help us make a fair prediction of what lies ahead of us this time around. 

Making sense of the past 

For the new domain name industry, this is the first-ever experience of an economic downturn. As far as we and our new domain name registry friends are concerned, looking at the legacy domain trends immediately before and after 2008 can be quite suggestive at this point. 

So, that's exactly what we did. We looked at whatever relevant data we could find for years before and after 2008. Was it useful? To a large extent, yes. 

Registration and Renewals Data for .COM, .NET, .ORG, .INFO and .BIZ from 2006–2010

Assumptions and Data Details: 

  • Legacy gTLDs include .com, .net, .org, .info and .biz
  • All Data taken from ICANN Monthly Registry Reports.

    New Registration data for .COM & .NET in ICANN Reports start from April 2007, hence data from 2003-2006 is estimated based on Blended Retention Ratios and Q1 2007 is based on Quarterly Ratios

    New Registration data for .ORG & .BIZ in ICANN Reports start from January 2007, hence data from 2003-2006 is estimated based on Blended Retention Ratios

As can be seen in the table above, from 2004 up to 2008, domains under management (DUMs) were increasing by at least 20% year over year (YoY). However, in 2008, the YoY growth slowed down to ~12%, and further down to ~6% in 2009.

New Registration Trend, 2007–2009

What's more, the new registrations data for legacy gTLDs indicates that 2008 saw a 4% decline over 2007, and registrations recovered by only ~0.69% in 2009 over the previous year. This was a steep departure from the healthy new registrations trend of the previous years. What's interesting is that new registrations failed to recover to steady growth even by 2012.

Of course, for our industry, profitability rides on the rate of renewals. It's no hard guess that, with the number of new registrations on the decline in 2008, gross renewals also took a hit, growing only ~6% in 2009.

To get a clearer picture of the renewals trend, we have made certain assumptions. For instance, we have estimated the overall retention percentage of DUMs based on the previous year and then assumed an 85% renewal rate for second and subsequent renewals.

We have then estimated the new registration retention % for each year, thereby arriving at an estimated first-time renewal rate.
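
As a rough illustration of that arithmetic (one reading of the approach described above, using placeholder numbers rather than Radix's actual figures):

    def first_time_renewal_rate(dums_prev, new_regs_prev, overall_retention,
                                later_renewal_rate=0.85):
        """Estimate the share of last year's new registrations that renewed once."""
        gross_renewals = dums_prev * overall_retention
        # Domains already past their first renewal are assumed to renew at the
        # flat 85% rate; what remains of the gross figure is first-time renewals.
        older_renewals = (dums_prev - new_regs_prev) * later_renewal_rate
        return (gross_renewals - older_renewals) / new_regs_prev

    # Placeholder inputs: 100M domains under management, 30M of them newly
    # registered, 73% overall retention.
    print(f"{first_time_renewal_rate(100e6, 30e6, 0.73):.0%}")  # prints 45%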

New Registration Trend, 2007–2009

Assumptions and Data Details: 

  • Numbers mentioned above are for .com, .net, .org, .info and .biz domains
  • All Data taken from ICANN Monthly Registry Reports.
  • 2nd and Subsequent Retention rate assumed at 85%
  • Domain Tasting rates and patterns have been excluded from these assumptions

And as can be seen here as well, the first-year renewals dipped to 45% in 2009, from a much higher average of ~58% in the previous years. This could be attributed to the overall market sentiment during that period, the quality of the names, and the general pressure on businesses.

What does it mean for us? Three words: Proceed with caution

Based on publicly available data, the domain industry, on the whole, has reported a surge in domain name registrations since February 2020. Legacy gTLDs' new registrations saw a ~5% jump in March 2020 and a ~10% jump in April 2020 when compared to the average monthly registrations for 2019. (Refer to the table below.)

Growth in Registrations of .COM, .NET, .ORG, .INFO and .BIZ

This is great news for our industry. In fact, we have seen a rise of 15-20% in registration volumes at Radix too. This is not limited to standard domain registrations; we saw a 22% increase in premium domain registrations as well, and a 15% increase in premium domain revenue in March-May vs. the previous six months. 

While .online saw a 45% increase in premium registrations and a 38% increase in revenue, .store saw a 70% increase in premium registrations and a 93% increase in revenue. This is clearly an indication of the urgency across industries to go online with meaningful, brandable names.

It's too early to celebrate, though. While we expect these registrations to be backed by meaningful usage, we have to take into consideration that this surge could be a result of knee-jerk, reactive reasons. So, obviously, we ought to be cautious when predicting the renewal rates for the next year.

However… and it's a big however

We cannot ignore the fact that times are vastly different now compared to 2008. Society is more dependent on the Internet today than ever before. And if that weren't enough, the pandemic has further accelerated the transition from offline to online for businesses across the board.

Global Internet Users Over the Years (Source: ITU's Measuring Digital Development)

This is a unique situation for our industry. As more and more businesses, institutions, and even next-door services are going online and speedily evolving their technology usage, our industry stands to play a crucial role in this mix.

So, our guess is as good as yours when it comes to expecting the future. At Radix, we live and breathe data! And yet, at an interesting juncture like this, there isn't any data that we can rely on 100%. For what it's worth, we have looked at the past. As for the future, it seems to be hanging on that big "however" we spoke of above.

  1. This article was originally published on Domain Name Wire.


circleid.com | 14-Jul-2020 23:13

UK Bans Huawei 5G Equipment, Also Orders 5G Kit to Be Removed From UK Networks by 2027

All mobile providers in the UK will be banned from buying new Huawei 5G equipment after 31 December and ordered to remove all the Chinese firm's 5G kit from their networks by 2027. The move follows the U.S. sanctions claiming Huawei poses a national security threat, which the Chinese firm denies. UK Digital Secretary Oliver Dowden says the move will delay the UK's 5G rollout by a year, and the cumulative cost could reach £2B. "This has not been an easy decision, but it is the right one for the UK telecoms networks, for our national security and our economy, both now and indeed in the long run," Dowden told the House of Commons today. Huawei has called the action "Bad news for anyone in the UK with a mobile phone," warning that it threatens to "move Britain into the digital slow lane, push up bills and deepen the digital divide." More on this by BBC's Leo Kelion.


circleid.com | 14-Jul-2020 21:39

DNS: An Essential Component of Cloud Computing

The evolution of the internet is anchored in the phenomenon of new technologies replacing their older counterparts. But technology evolution can be just as much about building upon what is already in place, as it is about tearing down past innovations. Indeed, the emergence of cloud computing has been powered by extending an unlikely underlying component: the more than 30-year-old global Domain Name System (DNS).

The DNS has offered a level of utility and resiliency that has been virtually unmatched in its 30-plus years of existence. Not only is this resiliency important for the internet as a whole, it is particularly important for cloud computing. In addition to the DNS's resiliency, cloud computing relies heavily on DNS capabilities such as naming schemes and lookup mechanisms for its flexibility, usability and functionality.

Historical Perspective

As far back as the original ARPANET (the precursor of the internet), communication endpoints have had both names and addresses. The association between the two was originally recorded in the classic HOSTS.TXT file, where a master copy was kept in a centralized server and copies were distributed to endpoints. Making updates to the ARPANET's HOSTS.TXT was originally a fully centralized function coordinated by Stanford Research Institute, while the actual use of the file to look up names and map them to network addresses was a local function, implemented within the endpoint.

That started to change in the 1980s, when Paul Mockapetris proposed the DNS, replacing HOSTS.TXT with a hierarchical naming system, with management of different parts of the overall namespace "delegated" to different endpoints. As Mockapetris noted in his design principles, "The sheer size of the database and frequency of updates suggest that it must be maintained in a distributed manner, with local caching to improve performance."

Emergence of Name Servers

Due to the delegation of authority, an organization could update the name-address mappings for endpoints within the organization locally without having to change the master file. But the trade-off was that lookups could no longer be implemented solely as a local function since no one had a copy of the entire file anymore.

Instead, the sending endpoint would need to interact with an "authoritative name server" for the receiving organization to get the network addresses for endpoints within that organization. And to identify the receiving organization's authoritative name server, the sending endpoint would also need to interact with other name servers — starting with the newly defined DNS "root" servers.

Specialized computers were set up within organizations (or by their internet service providers) to perform the (now iterative) resolution process of navigating from the root to top-level domains (TLDs) and onward to the endpoint's authoritative name server. Endpoints within the organization would then "outsource" their lookup process to the specialized servers, which were designated as recursive name servers.
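
To make the iterative process concrete, here is a minimal sketch (assuming the dnspython package; it starts at a.root-servers.net, follows referrals naively, and skips the error handling, caching, and CNAME chasing a real resolver needs):

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver

    ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

    def iterative_lookup(qname):
        """Walk from the root, through the TLD, to the authoritative name server."""
        server = ROOT_SERVER
        while True:
            # Advertise EDNS so large referrals are not truncated over UDP.
            query = dns.message.make_query(qname, dns.rdatatype.A,
                                           use_edns=0, payload=1232)
            response = dns.query.udp(query, server, timeout=5)
            if response.answer:  # an authoritative answer has been reached
                return [rr.to_text() for rrset in response.answer for rr in rrset]
            # Otherwise this is a referral: use a glue address if one was
            # supplied, or resolve the referred-to name server's own name.
            glue = [rr.to_text() for rrset in response.additional
                    if rrset.rdtype == dns.rdatatype.A for rr in rrset]
            if glue:
                server = glue[0]
            else:
                ns_name = response.authority[0][0].to_text()
                server = dns.resolver.resolve(ns_name, "A")[0].to_text()

    print(iterative_lookup("www.example.com."))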

Relationship Between the DNS and Cloud Computing

Like other parts of an organization's IT infrastructure, recursive name servers can be moved from on-premise deployments into the cloud. In fact, some organizations now contract with network operators and other service providers for their recursive DNS services; this external outsourcing of DNS resolution is like an external cloud deployment, providing "DNS resolution as a service" or "Cloud DNS." Organizations have likewise outsourced their authoritative name servers to external service providers.

But this isn't the only relationship that the DNS has to cloud computing. The DNS also plays an important role in enabling other cloud services. Indeed, resources in cloud computing platforms are generally identified with domain names and located by DNS lookups.

For example, Amazon Web Services recommends that "buckets" of objects — the basic unit of storage in its Simple Storage Service (or "S3") — be named according to DNS naming conventions. An appropriately named bucket can then be accessed via a Uniform Resource Identifier (URI), such as: "http://myawsbucket.s3.amazonaws.com/yourobject". Following standard DNS processing, a browser or application would then resolve the URI into a network address for the virtual server that hosts the bucket.

Similar naming conventions can be found in other cloud services, such as Microsoft Azure Storage (e.g., "http://mystorageaccount.blob.core.windows.net/mycontainer/myblob") and Google Cloud.
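
As a small illustration of that resolution step (using the illustrative bucket name from the S3 example above, not a real bucket, and the standard system resolver):

    import socket
    from urllib.parse import urlparse

    uri = "http://myawsbucket.s3.amazonaws.com/yourobject"
    host = urlparse(uri).hostname

    # Map the DNS-style bucket name to network addresses, as a browser would.
    addresses = {info[4][0]
                 for info in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)}
    print(host, "resolves to", sorted(addresses))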

Importance of DNS-based Conventions in Cloud Computing

A DNS-based naming scheme — and consequently a DNS-based lookup mechanism — can take advantage of the capabilities built into the DNS to balance application workload across multiple servers. These capabilities are fundamental to rapid elasticity, one of the National Institute of Standards and Technology (NIST)'s five characteristics of cloud computing. In particular, a domain name can provide a "virtual name" for a resource in a cloud platform — resource instances corresponding to the name can then be deployed at many different physical locations. This gives the effect, as described by NIST, that "the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time."

The DNS "function" of mapping a fully qualified domain name to a network address — a function implemented by an authoritative name server for the cloud platform — is thus a fundamental building block for cloud computing. Indeed, in order for endpoints to interact reliably and confidently with named resources on the cloud platform, the platform's name server must be available and accurate.

This principle continues up the DNS hierarchy all the way to the root. Authoritative name servers themselves have domain names, and the mapping between a name server's names and actual server instances is also managed through the DNS, at the next level up in the hierarchy. This means that cloud services also depend on the accuracy and availability of the name servers at higher levels of the DNS hierarchy — each step in the iterative resolution process, including the TLD name servers, as well as the root servers. If the DNS behind a cloud service isn't working, you probably won't be able to reach the cloud service. In today's cloud-dependent world, that's just one more reason why an ongoing commitment to preserving the security, stability and resiliency of the DNS is critical.
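
A short sketch of that dependency (assuming the dnspython package; "example.com" is a placeholder zone): a zone's name servers are themselves names that must be resolved, one level up, before the zone's own records can be reached.

    import dns.resolver

    zone = "example.com"  # placeholder zone
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_name = ns.target.to_text()
        # The name server's own name is resolved via the DNS, one level up.
        addresses = [a.to_text() for a in dns.resolver.resolve(ns_name, "A")]
        print(f"{zone} is served by {ns_name} at {addresses}")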

Final Thoughts

The ongoing availability and accuracy of the DNS at all levels of the hierarchy, under various workloads and attacks, is essential to the functioning of cloud-computing platforms. The higher up the DNS hierarchy, the more important resiliency becomes, because a wider range of names and services depend on it. In this sense, the DNS, one of the earliest and most successful distributed systems, may be considered today an essential function as a service: something that cloud platforms and applications all depend on.

Written by Dr. Burt Kaliski Jr., Senior VP and Chief Technology Officer at Verisign


circleid.com | 14-Jul-2020 20:40

Host to IP and DNS Analysis of Dozens of Fortnite-Inspired Typosquatting Domains

Captain America arrived on Fortnite in time for the 4th of July celebration. This announcement was big news to the gaming community, with search terms such as "fortnite captain america skin" and "fortnite captain america" significantly rising in popularity on Google in the past week. The update also required hours of server downtime to make way for maintenance.

Days before the scheduled downtime and update, our typosquatting data feed detected dozens of domain names closely related to Fortnite. That is consistent with how domain registration behavior reflects newsworthy events. We take a closer look in this post, notably by running a host to IP and DNS analysis.

WHOIS Data Comparison: Fortnite-Inspired Domains versus Epic Games Domain

The Typosquatting Data Feed picked up close to 50 domain names related to Fortnite on 1 July 2020, the same day they appeared in the Domain Name System (DNS). Here are a few of them:

  • fortniteformodle[.]com
  • fortniteformobi[.]com
  • fortniteformpbile[.]com
  • fortnitefreemobile[.]com

It is interesting to note that Fortnite does not have a dedicated website. Instead, Epic Games, its creator, only hosts a web page for the game, as it does for all the other games the company has created.

The domain registrations could be part of Epic Games's typosquatting protection strategy. Yet the differences in the WHOIS records of Epic Games's official website and the potential typosquatting domains might indicate otherwise.

WHOIS Records of Lookalike Domains

Using a bulk WHOIS lookup, we found that each of the 50 potential typosquatting domains shares the same privacy protection service, nameserver, registrar, and address.

  • Registrar name: Tucows, Inc.
  • Nameservers: ns15[.]above[.]com and ns16[.]above[.]com
  • Creation date: 30 June 2020
  • Registrant name and organization: Contact Privacy Inc.
  • Registrant address: 96 Mowat Ave, Toronto, Canada

WHOIS Records of the Epic Games Official Domain

All of the details cited above differ from those in the WHOIS record of epicgames[.]com. Epic Games's domain was created in 1995 and registered with the organization name "Epic Games, Inc." with an address in North Carolina, U.S.

A DNS lookup revealed that the domain resolves to these IP addresses (at the time of writing):

  • 52[.]23[.]121[.]216
  • 52[.]0[.]226[.]220
  • 3[.]94[.]26[.]26
  • 52[.]87[.]65[.]189
  • 107[.]23[.]187[.]0
  • 54[.]86[.]164[.]3
  • 54[.]88[.]3[.]65
  • 52[.]200[.]193[.]112

According to IP Geolocation API, these IP addresses belonged to Amazon, with Autonomous System Number (ASN) 14618.

The domain also uses the nameserver "ns-1094[.]awsdns-08[.]org."

A Deeper Host to IP and DNS Analysis of the Fortnite Lookalike Domains

A DNS lookup revealed that several of the likely typosquatting domains share the IP addresses 70[.]32[.]1[.]32 and 170[.]178[.]168[.]203 (again, at the time of writing).
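
For readers who want to reproduce this kind of grouping, here is a minimal sketch (standard library only; the names are the sample domains listed earlier in this post, kept defanged as in the text, and the results will naturally drift over time):

    import socket
    from collections import defaultdict

    flagged = ["fortniteformodle[.]com", "fortniteformobi[.]com",
               "fortniteformpbile[.]com", "fortnitefreemobile[.]com"]

    by_ip = defaultdict(list)
    for name in flagged:
        host = name.replace("[.]", ".")  # undo the defanging used in the article
        try:
            by_ip[socket.gethostbyname(host)].append(name)
        except socket.gaierror:
            by_ip["unresolved"].append(name)

    for ip, names in sorted(by_ip.items()):
        print(ip, names)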

An IP geolocation lookup, on the other hand, indicates that 170[.]178[.]168[.]203 is a U.S.-based IP address with ASN 46844 owned by Sharktech. Among the domains associated with the IP address are what look to be adult sites, as shown in the latter half of this screenshot:

On the other hand, 70[.]32[.]1[.]32 is also a U.S.-based IP address, but GigeNET owns it with ASN 32181. It's interesting to note that the IP address is also associated with one of the adult sites. Other associated domains are shown below as well.

What's more, our threat intelligence data and VirusTotal warned that both IP addresses could carry malware. The Fortnite lookalike domains may therefore be dangerous and possibly used by threat actors to, for example, lure gamers into clicking a link to a malware-laden page.

The slew of domain names inspired by Fortnite could be an attempt to maliciously get into the gamers' network. By detecting typosquatting domains early, Epic Games and other video game creators could help protect their users from cybercrime.


circleid.com | 14-Jul-2020 06:09

Google to Invest $10 Billion in India to Help Accelerate Its Digital Economy

Sundar Pichai, CEO of Google and Alphabet, announcing a $10 billion commitment towards the Google for India Digitization Fund (July 13, 2020)

A new Google for India Digitization Fund, announced by Sundar Pichai, CEO of Google and Alphabet, will invest approximately $10 billion into India over the next 5-7 years. The effort, aimed at accelerating India's digital economy, will be carried out through a mix of equity investments, partnerships, and operational, infrastructure, and ecosystem investments. More specifically, the company says the investments will focus on four key areas essential to India's digitization:

  1. Enabling affordable access and information for every Indian in their own language.
  2. Building new products and services that are deeply relevant to India's unique needs.
  3. Empowering businesses as they continue or embark on their digital transformation.
  4. Leveraging technology and AI for social good, in areas like health, education, and agriculture.

A China thing: Gaining a foothold in India has become crucial for American technology giants that have been largely shut out from doing business in China, writes TechCrunch's Manish Singh. Earlier this month, Google said it had abandoned plans to offer a new cloud service in the world's largest internet market.


circleid.com | 13-Jul-2020 23:33

MarkMonitor Releases New gTLD Quarterly Report for Q2 2020

New gTLD Quarterly Report, Q2 2020

MarkMonitor today released its latest issue of the New gTLD Quarterly Report for the second quarter of 2020, including a particular focus on registration trends during the pandemic.

As COVID-19 makes its mark across the globe, businesses adapt to new behaviors across the domain landscape. Taking this new environment into account, MarkMonitor analysts reviewed the registration counts of selected new gTLDs towards the beginning of the pandemic (February 19) and then again 90 days later (May 19). In doing so, the team found some interesting results.

As stay-home orders limited individuals' ability to go out to public places and engage in hospitality activities like visiting restaurants, MarkMonitor analysts found a correlation in the increase in domain registrations of topically relevant TLDs. For instance, with delivery being the only mode of food service allowed in many places, it seems logical that there has been more interest in businesses or individuals buying domain names that include TLDs such as .DELIVERY or .MENU.

Registration counts of new food service gTLDs, February – May 2020 (Source: MarkMonitor New gTLD Quarterly Report, Q2 2020)

  TLD          Domain counts 2/19/2020   Domain counts 5/19/2020   Count increase   Percent increase
  .BAR         38,847                    87,145                    48,298           124.33%
  .REST        29,872                    41,563                    11,691           39.14%
  .DELIVERY    5,770                     7,002                     1,232            21.35%
  .MENU        5,154                     5,425                     271              5.26%
  .COFFEE      19,453                    20,361                    908              4.67%
  Cumulative   99,096                    161,496                   62,400           62.97%

Other interesting updates in the report covering the gTLD landscape include:

  • New gTLD registration trends during the COVID-19 pandemic
  • New gTLDs launched this quarter
  • .Brand domain registrations that include coronavirus and COVID-19
  • .Brand TLD news and notes, including the top .brand labels
  • ICANN and INTA meetings updates

Download the full report here.


circleid.com | 13-Jul-2020 18:01

Why You Shouldn't Believe Network Speed Tests

The media is filled with hyperbolic claims that "Our network is the fastest!"

And there are many so-called "Speed Test" tools available on the Internet. Most are easily run in a web browser.

Should you trust those tools?

Not really.

The popular speed testing tools provide a very narrow and limited measure of network "speed."

It is quite possible that a network that is rated as "fast" could actually deliver poor results to many applications.

Why is this so?

What's In Those Speed Tests?

Most speed test tools on the Internet run a limited regime of tests:

  • ICMP Echo/Reply ("ping") to measure round-trip time (although most tests are unclear whether they are reporting round-trip time or dividing by two to estimate one-way latency.)
  • HTTP GET (download) and PUT (upload) to measure TCP bandwidth.

Some more sophisticated tools may add things like:

  • Traceroute (properly done with UDP packets, improperly done using ICMP Echo packets)
  • DNS queries

Some speed test tools use IPv4, some use IPv6, some use whatever the underlying web browser and IP stack chooses.
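
As a rough illustration of the two numbers most of these tools boil down to, here is a minimal sketch (standard library only; a TCP connect stands in for ICMP ping because raw sockets need elevated privileges, and the host and URL are placeholders; a meaningfully sized test file gives a more realistic throughput figure):

    import socket
    import time
    import urllib.request

    def tcp_rtt_ms(host="example.com", port=443):
        """Approximate round-trip time via the TCP three-way handshake."""
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.monotonic() - start) * 1000

    def download_mbps(url="https://example.com/"):
        """TCP bulk-transfer throughput via a single HTTP GET."""
        start = time.monotonic()
        body = urllib.request.urlopen(url, timeout=10).read()
        elapsed = time.monotonic() - start
        return (len(body) * 8) / (elapsed * 1_000_000)

    print(f"approximate RTT: {tcp_rtt_ms():.1f} ms")
    print(f"approximate download: {download_mbps():.2f} Mbit/s")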

Sounds Good, So What's Wrong With 'Em?

Network performance is highly related to the way that the devices on a network converse with one another.

For example:

  • Does the application software (and its server) use UDP or TCP?

    UDP is vulnerable to many network phenomena such as IP fragmentation, highly variable latency/jitter, packet loss, or alteration of the sequence of packets (i.e., the sender sends packets A, B, and C, the receiver gets them in the order B, C, A.), etc.

    TCP, on the other hand, although reliable, may withhold delivering data to the receiver while it internally tries to deal with packet losses, changes in end-to-end latency, and network congestion.

  • Does the application's data have real-time constraints? For example, voice or video conferencing applications have very tight time constraints else the images may break up, freeze or words be lost.
  • How big are the chunks of data being sent? Larger data, particularly very large high-definition video, is more vulnerable to loss on the network, transient congestion problems, or IP fragmentation issues than are small data packets.

The bandwidth number generated by most speed test tools is based on World-Wide-Web HTTP GET (download) and HTTP POST (upload) transactions. These are bulk transfers of large amounts of data over TCP connections.

Bandwidth numbers based on TCP bulk transfers tend to be good indicators of how long it may take to download a large web page. But those numbers can be weak indicators of performance for more interactive applications (e.g. Zoom).

Moreover, TCP tries to be a good citizen on the network by trying hard to avoid contributing to network congestion. TCP contains several algorithms that kick in when a new connection is started and when congestion is perceived. These algorithms cause the sending TCP stack to reduce its transmission rate and slowly creep back up to full speed. This means that each new TCP connection begins with a "slow start." In addition, any lost packets or changes in perceived round-trip time may send the sending TCP stack into its congestion avoidance regime during which traffic flows will be reduced.
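
To see why this matters, here is a back-of-the-envelope sketch of how many round trips a fresh connection needs before it can fill a given link (it assumes an initial window of 10 segments of 1,460 bytes that doubles every round trip, a simplification of real congestion control):

    def rtts_to_fill(target_mbps, rtt_ms, init_segments=10, mss_bytes=1460):
        """Count the slow-start round trips needed to reach the target rate."""
        window_bits = init_segments * mss_bytes * 8
        bits_needed_per_rtt = target_mbps * 1e6 * (rtt_ms / 1000)
        rounds = 0
        while window_bits < bits_needed_per_rtt:
            window_bits *= 2
            rounds += 1
        return rounds

    # Roughly 5 round trips before a new connection can fill a 100 Mbit/s
    # link at a 30 ms round-trip time.
    print(rtts_to_fill(100, 30))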

Modern web pages tend to be filled with large numbers of subsidiary references. Each of those tends to engender a Domain Name System lookup (UDP) and a fresh TCP connection (each with its own slow start penalty.) As a consequence, modern web page performance is often not so much limited by network bandwidth but more by protocol algorithms and network round-trip times.

So What Do We Really Need?

Unfortunately, a full measure of the quality and speed of a network path includes a large number of often obtuse numbers.

  • Whether the path contains parallel elements due to load balancing or bonding of physical links. (In other words, it is good to know whether all the traffic follows the same path or whether it is divided among multiple paths with possibly quite different characteristics.)
  • Whether the network path is symmetrical or whether each direction takes a different route. (This is very common.)
  • Path MTU (Maximum Transmission Unit for the entire one-way path — a separate value is needed for each direction.)
  • End-to-end latency, and often, more importantly, a statistical measure of the packet-to-packet variation of that delay, often called "jitter."
  • Packet loss rates and a measure of whether that loss occurs continuously or in bursts. (This is particularly important on paths that include technologies subject to outside interference and noise such as wireless links.)
  • Buffering along the path (in other words, whether the path may suffer from "bufferbloat".)
  • Packet re-sequencing rates and a measure of whether that is burst behavior or continuous.
  • Whether there are "hidden" proxy devices (most likely HTTP/HTTPS or SIP proxies) that are relaying the traffic.
  • Whether there are any rate limiters or data quotas on the path.
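
As a small illustration of how a few of the figures above (latency, jitter, loss) are derived from raw probe results, here is a sketch with made-up sample values, where None marks a lost probe:

    import statistics

    samples_ms = [21.4, 22.1, None, 20.9, 35.7, 22.3, None, 21.8]  # made-up probes
    received = [s for s in samples_ms if s is not None]

    latency = statistics.mean(received)
    jitter = statistics.pstdev(received)  # one common, simplified definition of jitter
    loss = (len(samples_ms) - len(received)) / len(samples_ms)

    print(f"latency {latency:.1f} ms, jitter {jitter:.1f} ms, loss {loss:.0%}")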

What Can A User Do?

Users are somewhat limited in their ability to control protocols and applications.

The user can check the following things:

  • If using a wi-fi network at home or work in conjunction with Bluetooth, make sure that you are attached to the wi-fi on the 2.4 GHz band. Many user devices have only one radio. If that device is connected to wi-fi in the 5 GHz band, then that radio is being rapidly switched between Bluetooth on the 2.4 GHz band and wi-fi on the 5 GHz band. That's a recipe for generating destructive packet loss and jitter.
  • Make sure your home wi-fi and router devices have some of the new anti-bufferbloat code. See What Can I Do About Bufferbloat?
  • Be aware when you may be sharing your network resources with other users or other applications.

What Tools Do Developers Have To Make Sure Applications Behave Well Under Real-Life Conditions? Enter the Network Emulator.

Speed test tools tend to give an optimistic report of how a network behaves for a highly constrained number of applications. Similarly, many network developers test their code only under optimal laboratory conditions.

There are tools available to developers so that they can assure that their code and products are robust and behave well in the face of inevitable sub-optimal network conditions.

Most of these tools come under the heading of "network emulators." These effectively act as a bothersome man-in-the-middle, delaying some packets, tossing others, perhaps re-sequencing packets, or even duplicating them.

Network Emulators come in a variety of capabilities and accuracies:

  • Simple emulators are built into some mobile phones.
  • There are a couple of open-source packages that typically exist as kernel modules for Linux or FreeBSD. These usually must be used through an arcane command-line interface. And their accuracy can vary wildly depending on the underlying hardware.
  • There are external devices that are inserted into an Ethernet link (like one would insert an Ethernet switch.) These devices tend to have better accuracy and performance and often have web-based graphical user interfaces. IWL's KMAX is in this category.

There are also mathematical emulators. Those are more for those who are designing large networks and want to perform queueing theory analysis of how that network might perform if new links are added or removed.

Written by Karl Auerbach, Chief Technical Officer at InterWorking Labs


circleid.com | 12-Jul-2020 23:28

Bulk Domain Lookup of 3,000+ NRDs with "Deal" Word Strings Appearing Days before July 4

The U.S. Independence Day comes with both fireworks and the best deals. On this holiday, retailers usually offer big discounts. At this time when people may opt to shop online, several publications like TechRadar and Business Insider even curated a list of 4th of July deals from different retailers.

Several days before the celebration, however, we detected thousands of newly registered domains (NRDs) containing the word "deal." While this might be coincidental, we decided to take a closer look at these registrations aided by bulk domain lookup, DNS lookup, and IP geolocation tools.

What a Bulk Domain Lookup Can Tell Us About the "Deal" Registered Names

On 1 July 2020, the Typosquatting Data Feed detected a total of 3,224 domain names that contain the word "deal." In fact, there were 3,606 domains in the complete list of newly registered domains on 1 July whose names contain "deal". The typosquatting feed collects groups of domains registered on the same day in which each domain name is similar to the others in the same group; the 3,224 domains were thus members of such groups. Notably, 2,996 of them were members of a single group, while the rest were registered in different batches.

Around 3,205 of these domain names used the .top generic top-level domain (gTLD). While we can't assume that these domains are automatically suspicious or dangerous, it's relevant to note that the badness index of .top is at 31.8%.

All of the domains also follow the same format — [xxx]deal[.]top — where "xxx" is a random three-letter combination, possibly indicating the registrants' intentions to own the majority of these combinations.
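
A quick way to check names against that shape is a simple pattern match; the sketch below is illustrative only, and the sample list deliberately mixes flagged and unrelated names:

    import re

    pattern = re.compile(r"^[a-z]{3}deal\.top$")
    samples = ["tijdeal.top", "tngdeal.top", "bigsummerdeal.com", "clideal.top"]

    for name in samples:
        verdict = "matches" if pattern.match(name) else "does not match"
        print(f"{name} {verdict} the [xxx]deal[.]top pattern")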

A few examples of the domains are shown in the image below:

Checking the Domains' WHOIS Records Using Bulk WHOIS Lookup

Aside from the apparent similarities in gTLD use and domain format, the domain names also shared some commonalities. We obtained the WHOIS records of a majority of them with the help of our bulk domain lookup tool and found the following:

  • Domain registrar: The registrar of circa 99% of the domains is Chengdu West Dimension Digital. The rest of the domains were distributed among Alibaba Cloud Computing; GMO Internet, Inc.; One[.]com; and Tucows, Inc.

    Chengdu West Dimension Digital; Alibaba Cloud Computing; and GMO Internet, Inc. are among the five registrars that constitute 95% of blacklisted domain names in the .top space.

  • Registrant organization: While the registrant names have been redacted, the registrant organization of circa 99% of the domains is Ji Ping Xie. A search for the name yielded inconclusive results, so we can't confirm if such an organization exists or if it may refer to an individual.

  • Registrant address: The registrant state of all domains that belong to Chengdu West Dimension Digital is Fu Jian in China. All other contact and address details have been redacted for privacy.

Digging Deeper: DNS Lookup and IP Geolocation

All of the details discussed above could be enough to raise a red flag among organizations with stringent cybersecurity measures. Still, some security teams and investigators may want to investigate further.

For instance, we found multiple domain names that resolve to IP addresses that belong to the same IP range with the help of DNS Lookup. The IP addresses are 69[.]30[.]210[.]3, 69[.]30[.]210[.]4, and 69[.]30[.]210[.]6. Some of the domains that resolve to these IP addresses are:

  • tijdeal[.]top
  • tngdeal[.]top
  • xyqdeal[.]top
  • pybdeal[.]top
  • clideal[.]top

IP Geolocation Lookup revealed that these IP addresses are owned by Kansas City-based WholeSale Internet, and share the same GeoNames ID 12047177. WholeSale Internet offers dedicated servers, including up to five usable IPv4 addresses, depending on the selected plan.

It's hard to draw definite conclusions about the nature of these "deal" NRDs. The timing of the registrations may have been coincidental with the U.S. Independence Day. Still, monitoring these domain names until they have proven legitimate (or potentially malicious) is certainly a relevant cybersecurity practice.


circleid.com | 12-Jul-2020 03:58

Did Broadband Deregulation Save the Internet?

Something has been bothering me for several months, and that usually manifests in a blog at some point. During the COVID-19 crisis, the FCC and big ISPs have repeatedly said that the only reason our networks weathered the increased traffic during the pandemic was due to the FCC's repeal of net neutrality and deregulation of the broadband industry. Nothing could be further from the truth.

The big increase in broadband traffic was largely a non-event for big ISPs. Networks only come under real stress during the busiest times of the day; it's during these busy hours that network performance collapses if a network is overloaded. There was a big increase in overall Internet traffic during the pandemic, but the busy hour was barely affected. The busy hour for the Internet as a whole is mid-evenings when the greatest number of homes are watching video at the same time. Every carrier that discussed the impact of COVID-19 said that the web traffic during the evening busy hour didn't change during the pandemic. What changed was a lot more usage during the daytime as students took school classes from home, and employees worked from home. Daytime traffic increased, but it never grew to be greater than the evening traffic. As surprising as that might seem to the average person, ISP networks were never in any danger of crashing — they just got busier than normal during the middle of the day, but not so busy as to threaten any Internet crashes.

It's ironic to see the big ISPs taking a victory lap about their performance during the pandemic because the pandemic shined a light on ISP failures.

  • First, the pandemic reminded America that there are tens of millions of rural homes that don't have good broadband. For years the ISPs argued that they didn't invest in rural America because they were unwilling to invest in an over-regulated environment. The big ISPs all promised they would increase investment and hire more workers if they were deregulated. That was an obvious lie, since big ISPs like Comcast and AT&T have cut investments since the net neutrality repeal, and collectively the big ISPs have laid off nearly 100,000 workers since then. The fact is that the big ISPs haven't invested in rural broadband in decades, and even 100% deregulation is not enough incentive for them to do so. The big ISPs wrote off rural America many years ago, so any statements they make to the contrary are purely rhetoric and lobbying.
  • The pandemic also highlighted the stingy and inadequate upload speeds that most big ISPs offer. This is the broadband crisis that arose during the pandemic, and it's the one the big ISPs aren't talking about. Many urban homes that thought they had good broadband were surprised when they had trouble moving the office and the classroom into the house. The problem was not download speeds but the solid, reliable upload speeds needed to connect to school and work servers and to spend the day on video chat platforms. Homes have reacted by migrating to fiber when it is available. The number of households that subscribe to gigabit broadband doubled from December 2019 to the end of March 2020.

The big ISPs and the FCC have also made political hay out of the Keep America Connected Pledge, under which ISPs promised not to disconnect homes for non-payment during the pandemic. I'm pretty sure the ISPs will soon go silent on that topic, because the other shoe is about to drop: the ISPs expect homes to catch up on those 'excused' missed payments if they want to keep their home broadband. It's likely that millions of homes that ran out of money after losing their jobs will soon be labeled as deadbeats by the ISPs and won't be let back onto the broadband networks until they pay their outstanding balance, including late fees and other charges.

The shame of the Keep America Connected Pledge was that it had to be voluntary because the FCC destroyed its ability to regulate ISPs in any way. The FCC has no tools left in the regulatory quiver to deal with the pandemic after it killed Title II regulation of broadband.

I find it irksome to watch an industry that completely won the regulatory battle keep acting like it is under siege. The big ISP lobbyists won completely and got the FCC to neuter itself, and yet the big ISPs miss no opportunity to keep making the same false claims they used to win the regulation fight.

It's fairly obvious that the big ISPs are already positioning themselves to fight off the time when the regulatory pendulum swings the other way. History has shown us that monopoly overreach always leads to a public reaction that demands stronger regulation. It's in the nature of all monopolies to fight against regulation — but you'd think the ISP industry could come up with something new rather than to repeat the same lame arguments they've been making for the last decade about how overregulation is killing them.

Written by Doug Dawson, President at CCG Consulting


circleid.com | 12-Jul-2020 02:33

Macro Musings for Digital Strategies Using Unstructured Data

Hey Siri:

"Write a blog about our latest brand mentions in Shenzhen, China and Bogata, Columbia to include sentiment, peak times when our brand was mentioned, and data on peak purchasing times with product SKUs and images. Thanks Siri. Oh, and another thing, please update our landing pages to reflect this data in the native language of the region. Thanks again, Siri."

When Doug Dawson wrote his article in February on Artificial Intelligence, he felt that #ai, like its current counterpart 5G, is saddled with too much hype. Certainly, some technology out there deserves the hype and some does not. After receiving a certification (Intro to AI) from IBM (while using Watson), I'm here to share that artificial intelligence, along with its subsets of machine learning and deep learning, definitely deserves a soundbite.

While there are many tentacles to AI, including but not limited to machine learning and deep learning, connectors, and data lakes, I will attempt to underscore why unstructured data is the most important data set for today's digital marketing initiatives.

Forget your structured data and your hell-bent biases.

Unstructured data is the nucleus of an organization, irrespective of its size. The vast majority of an organization's data is unstructured, thanks to heavy investment in IoT and the embrace of artificial intelligence. This kind of data holds huge potential for business leaders to leverage, giving them an edge in launching real-time campaigns through the use of content connectors. It helps marketers understand their business results more acutely and anticipate and react more quickly to risk and opportunity. Ged Parton, CEO of Maru Group, comments, "Text analytics technology is a vital resource to analyzing the thousands of unstructured data points generated every day through customer feedback, reviews and service interactions. But too often, as researchers, we're not utilizing these tools to their full potential, clouding the technology by creating our own structures and code-frames and, in essence, introducing our own human biases into the results."

Most leaders in digital marketing currently work with structured data, which is essentially organized. By applying scientific methodologies, we extrapolate hypotheses from this structured data and begin to formulate a predetermined mindset to create specific campaigns in various performance-based channels.

But what do we do with all of the unstructured data, essentially data that hasn't found a home? Unstructured data comes in many different formats, but the most established types include books, journals, documents, metadata, health records, audio files, video, analog data, images, files, and unstructured text such as the body of an email message, a web page, or a PDF document that languishes on your web properties. This unstructured data can reveal important and specific insights into your site visitors' behavior, which can then be used to deploy behavioral-based campaigns in real time for optimal return. For example, IBM Watson is widely known to be capable of understanding emotions when listening to certain audio files. Watson is also able to track and analyze inbound phone calls and extrapolate visitor sentiment. Harnessing sentiment is critical to developing these real-time campaigns.
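
To make the idea concrete, here is a minimal sentiment-scoring sketch using the open-source NLTK VADER analyzer as a stand-in for a commercial service such as Watson. The feedback snippets are invented, and a production pipeline would use far richer models and real customer data.

    # Score invented feedback snippets with NLTK's VADER sentiment analyzer
    # (an open-source stand-in for a commercial service such as Watson).
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

    analyzer = SentimentIntensityAnalyzer()
    feedback = [  # invented examples of unstructured customer text
        "The checkout flow was painless and the support chat was great.",
        "Your app crashed twice while I was trying to pay. Very frustrating.",
    ]

    for text in feedback:
        scores = analyzer.polarity_scores(text)  # neg/neu/pos plus a compound score
        print(f"{scores['compound']:+.2f}  {text}")

The compound score runs from -1 (strongly negative) to +1 (strongly positive), which is the kind of signal a real-time campaign could key off.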

Unfortunately, humans all too often let their own assumptions permeate unstructured data and, through no fault of their own, form unsuspected biases about a visitor's intent. This is the most common critical mistake humans make with AI: they form biases about this unstructured data, and those biases constrain machine learning's ability to deliver its full potential. In a subsequent article, I'll delve into ethics and biases in AI, which is paramount for current projects that will change society. Today, though, I want to focus on amplifying advanced digital marketing concepts using unstructured data.

Let's just focus on content for a moment. Another way the subclasses of AI are extensively used in unstructured data is finding and closing gaps in your content marketing. Many modeling tools use machine learning to automatically and accurately group relevant comments together into clusters without the need for user-defined rules and human biases. It empowers users and researchers to identify key themes, understand relationships between trends, and uncover hidden patterns in data in just minutes.
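
As a rough sketch of what such clustering looks like under the hood, the example below groups a few invented comments using TF-IDF features and k-means from scikit-learn; commercial tools add topic labelling, scale, and tuning on top of the same basic idea.

    # Cluster a few invented comments: TF-IDF features plus k-means.
    # A toy stand-in for the commercial modeling tools described above.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    comments = [
        "Love the new dashboard layout",
        "The dashboard redesign looks great",
        "Shipping took two weeks, unacceptable",
        "My order arrived late again",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(comments)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    for label, comment in sorted(zip(labels, comments)):
        print(label, comment)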

Content marketing is "earned media," and by closing this gap, you capture probably the most important segment of visitors: those a human analyst alone wouldn't have found. The essential narrative here is that performance metrics in content marketing are perceived to be more desirable from a "lead" perspective than paid media. This is why unstructured data is such a critical component of your marketing arsenal for discovering gaps in content and delivering higher-than-expected ROI.

From a branding perspective, monitoring unstructured data from various performance channels is also a critical step in your digital transformation journey, since it captures your users' sentiment. The ability to vacuum up this sentiment from disparate parts of the Internet is a game-changer for the health and wellness of a brand.

By capturing sentiment immediately, digital marketers can instantly optimize channels and online web properties. Digital marketers need to be nimble enough to capitalize on positive sentiment and mitigate liability in case the sentiment turns negative. Machine learning and deep learning evaluate these brand mentions and immediately provide a sentiment score. By having AI monitor your brand in real time, you can scale digital advertising expenditures to optimize performance when sentiment is positive. Further, you can improve individualized content experiences across all of your channels, assess and evolve creatives, and split test your landing pages across a bevy of tools. Other optimizations at scale include moderating comments, monitoring ratings and reviews, and pushing geo-targeted ads to mobile devices where sentiment is most dense.

Machine learning also determines which offers intrinsically motivate site visitors to act and engage users in conversations with bots. My favorite example of these AI-driven, multiple-choice chatbots comes from ADT (www.adt.com). Check them out; their chat extension is highly engaging.

From a production perspective, unstructured data can be analyzed for grammar, sentiment, tone, and style. It can deliver on data-driven content and curate content from multiple sources, as mentioned in the example above. Unstructured data can build landing pages, develop real-time ad copy for your paid channels, and optimize site and web-property content for enhanced search results, speech, and text. Finally, this unstructured data can be translated into native languages while writing your next email subject lines.

I can't say enough about how valuable your unstructured data is. If you are a brand that wants to execute on your digital transformation experience, harnessing your unstructured data should become a priority. When transforming your unstructured data, one thing to keep in mind is that being transparent with the site visitor is of utmost importance. Let the visitor know they are interacting with a chatbot or other AI-enabled features on your site, and this will alleviate any additional friction that may ensue.

Written by Fred Tabsharani


circleid.com | 11-Jul-2020 03:10

Internet of Things Requires a Rethink of Business Models

While there is undoubtedly a lot of interest in machine-to-machine communication (M2M) and the Internet of Things (IoT), what we see is only what is happening on the surface. Most M2M activity is taking place unnoticed. For example, most newly produced electronic devices are now M2M-enabled.

Over 100 million smart meters have already been deployed by the electricity industry, with literally hundreds of millions of them in the pipeline. Healthcare is another key industry.

All new hospitals now operate large-scale M2M operations, tracking their equipment with real-time information. Most local governments have invested massively in mapping their assets; this is now being followed up by adding connectivity to these assets — whether it is streetlamps, drainage, sewerage or trees, all are in the process of becoming part of a smart city.

Big Data

A critical element of telecommunications' future is to use the network, with all of the M2M devices connected to it, to collect the data from those devices, process that data, and then deliver actionable real-time analyses to the users of the M2M services.
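
As a purely illustrative sketch of that collect-process-deliver loop, the snippet below aggregates a handful of simulated smart meter readings into a simple summary; a real deployment would sit behind a message broker and a stream-processing engine rather than an in-memory list.

    # Illustrative collect/process/deliver loop over simulated smart meter data.
    from collections import defaultdict
    from statistics import mean

    readings = [  # (meter_id, kWh) samples arriving from the field
        ("meter-001", 0.42), ("meter-002", 0.85),
        ("meter-001", 0.47), ("meter-002", 0.91),
    ]

    by_meter = defaultdict(list)
    for meter_id, kwh in readings:        # collect
        by_meter[meter_id].append(kwh)

    for meter_id, values in sorted(by_meter.items()):   # process and deliver
        print(f"{meter_id}: average {mean(values):.2f} kWh over {len(values)} samples")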

This development is also known as "big data." Unless the issue of data analytics is properly addressed, and the companies involved have at least a broad strategy in place around it, IoT and M2M in isolation will make little sense.

IoT Plus M2M

Taking this now one step further, both IoT and M2M cannot be looked at in isolation. For these technologies to be successful, the broader ecosystem needs to be considered. This includes developments in high-speed broadband (fixed and mobile), cloud computing, cybersecurity, data centres and the already mentioned data analytics.

M2M and IoT are very similar in their functionality, communication and collecting data. However, M2M refers to the interaction of two or more devices that are connected to each other. This is about machines, smartphones, and appliances.

IoT is about sensors, cyber-based physical systems, wearable systems, mobile apps, internet-based services and so on. In all reality, the terms are often used in an interchangeable way.

As the M2M and IoT technologies and their markets develop, there is now more insight into their future directions.

Key issues here include:

  • IoT and M2M will become better defined, and more niche, market-based features will be added along the way;
  • The broader ecosystems around these technologies will have to change as well. M2M and IoT are increasingly becoming business concepts, and data analytics is at the core of these models;
  • There is still a lack of standards, interoperability, and industry leadership in general, and this is hampering some of the large-scale developments;
  • The telcos, as suppliers of M2M and IoT products and services, will need to look for new (telco) business models aimed at the enterprise market; and
  • Especially in relation to the promises of 5G in this market, we need to differentiate between what is hype and what is reality.

Written by Paul Budde, Managing Director of Paul Budde Communication


circleid.com | 10-Jul-2020 23:17

The Global Domain Name Market in 2019: Will New TLDs Create a Sensation?

Afnic, the association that manages and operates various TLDs including the .fr, has published its report on the global domain name market in 2019.

The report highlights a slight upturn in the market, which has generally continued the growth initiated in 2018. Thus, the global domain name market accounted for approximately 346 million domain names at the end of December 2019, up 4.7% compared to 4.0% in 2018.

Distribution among the main TLDs:

  • 181 million legacy TLDs (.com, .net, .org, etc.);
  • 132 million ccTLDs (so-called "geographic" TLDs, corresponding to a territory or country like the .fr);
  • 33 million "new TLDs" created from 2014 onwards (nTLDs encompass different segments, including geoTLDs such as .bzh, .paris and .alsace; brand TLDs such as .sncf and .mma; community TLDs; and generic TLDs).

m DNs: Year-end data expressed in millions of Domain Names

* Other Legacy TLDs: generic TLDs created before 2012, such as .aero, .asia, .biz, .net, .org, .info, .mobi, etc.

** Total gTLDs: measures all the domain names managed under a contract with ICANN.

*** ccTLDs or “country code Top-Level Domains”: domains corresponding to territories, such as the .fr for France. The data presented no longer includes “Penny TLDs”, i.e. ccTLDs retailed at very low prices, if not free of charge. These ccTLDs are subject to very large upward and downward movements that do not reflect actual market developments and bias aggregate data.

**** Penny ccTLDs: estimated volume of names filed in these “low-cost” or free domains

More details and, as usual, an in-depth investigation of market trends are available in the full report.

I hope you enjoy the research and the read!

Written by Loic Damilaville, Market Research Manager at Afnic


circleid.com | 10-Jul-2020 19:56

Don't Kid Yourself, dotCOM Is King for Branding Your Business

There are now more than 1,000 top-level domains (TLDs), but which is best for branding yourself or your business? With search engines, does it even make a difference? Does it matter which TLD you choose as long as you rank high enough on Google?

That is what many would have you believe, but there is one power greater than any search engine.

The Public

At one time, my brother and I owned Rate.com. We leased it to a mortgage broker in Laguna Beach, who used it to generate leads. It was nothing more than the name and a landing page, but it produced about fifty solid leads a week with no advertising or promotion. One day she called, all excited, to say that Rate.com had generated almost a hundred leads in the last 24 hours.

I was stumped.

I've been developing and monetizing names since 1997 and know that one of the great things about popular generic names like Rate.com is that they always have wind in their sails that's easy to monetize, but I'd never seen anything like this. I sat down to have dinner, turned on the TV, and ten minutes later saw a commercial for Rate.net.

When Remembering Your Brand the Public Will Default to dotCOM

I have long believed this, but the Rate.com vs Rate.net episode made it glaringly obvious. "Rate" is a popular generic word to remember, and one would believe that any TLD promoted after it would be a cinch to recall.

But that's not what the public decided.

The Key to Brand Success Is Instant Memorability

Instant and accurate memorability is the ultimate key to your brand's success. With so many worshiping at the altar of the almighty search engines, many have forgotten this golden rule, but it's simply Marketing 101.

Yes, search engines are important, very important, but the public's ability to accurately remember your internet address on the first pass supersedes all.

Ironically, most entrepreneurs understand this because they know that instantaneously imprinting their brand on the public's consciousness is usually the difference between success and failure. Still, I've had far too many conversations in which entrepreneurs tell me their webmaster or tech department chose their domain name.

And ten minutes later I'll ask the question that is always the kiss of death.

"What is your domain name, again?"

Written by David Castello, Co-Founder at CastelloBrothers.com


circleid.com | 10-Jul-2020 18:41

Bulk WHOIS Lookup of Florida SMMC Lookalike Domains Shows Signs of Typosquatting

A bulk whois lookup of domain names similar to the official website of the Florida Statewide Medicaid Managed Care (SMMC) Program — www[.]flmedicaidmanagedcare[.]com — indicates that a typosquatting event, or a cybersquatting one at the very least, might be at play.

Typosquatting Data Feed detected 45 domain names registered in bulk on 21 June 2020. With more than 4 million program enrollees as of 31 May, such domain registration behavior may require investigation.

Florida SMMC Typosquatting Domains

Typosquatting Data Feed flags domain names that appear on the Domain Name System (DNS) the same day that similar ones do. As such, it can help detect bulk domain registration.
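
The detection logic behind the commercial feed is not public, but the general idea can be sketched with nothing more than the Python standard library: compare each newly seen domain's second-level label against a protected name and flag the close matches. The sample list of new registrations below is invented.

    # Flag lookalike domains by string similarity to a protected name.
    # Illustrative only; not the logic of the commercial data feed.
    from difflib import SequenceMatcher

    PROTECTED = "flmedicaidmanagedcare"
    THRESHOLD = 0.85  # similarity ratio; tune to taste

    new_domains = [  # invented sample of one day's new registrations
        "flmedicaidmanagecare.com",
        "fl-medicaidmanagedcare.com",
        "flmedicaidmanagedcare.org",
        "example-shop.com",
    ]

    for domain in new_domains:
        label = domain.split(".")[0].replace("-", "")
        score = SequenceMatcher(None, PROTECTED, label).ratio()
        if score >= THRESHOLD:
            print(f"{domain}  similarity={score:.2f}")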

Below are the 45 potential typosquatting domain names found.

It is possible that the owner of the legitimate domain flmedicaidmanagedcare[.]com registered the lookalike domains as part of a typosquatting protection strategy. Hence, it may be helpful to compare the WHOIS record of the official domain with those of the lookalikes.

A Bulk WHOIS Lookup Shows Discrepancies with the Legitimate Domain

With the help of Bulk WHOIS Lookup, we looked at the lookalike domains' ownership details and found that:

  • The registrar of all the likely typosquatting domains is Alibaba Cloud Computing (Beijing) Co., Ltd.
  • All of the registrants' names, email addresses, and organizations were redacted for privacy.
  • The registrant address of each domain is Jiangsu, China.

These details differ markedly from the WHOIS registration details of flmedicaidmanagedcare[.]com. WHOIS Lookup revealed that the legitimate domain's registrar is Wild West Domains, LLC. Its registrant details are not hidden, and its registrant organization, Automated Health Systems, located in Pennsylvania, U.S., is indicated as well.
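
Comparisons like this are easy to script. The sketch below assumes the third-party python-whois package (pip install python-whois); WHOIS field names and availability vary by registry and privacy settings, so treat it as an illustration rather than a definitive check. The lookalike shown is hypothetical.

    # Compare selected WHOIS fields of lookalike domains against the official one.
    # Assumes the third-party python-whois package; fields vary by registry.
    import whois

    OFFICIAL = "flmedicaidmanagedcare.com"
    lookalikes = ["flmedicaidmanagecare.com"]  # hypothetical lookalike for illustration

    baseline = whois.whois(OFFICIAL)

    for domain in lookalikes:
        record = whois.whois(domain)
        for field in ("registrar", "org", "state", "country"):
            if record.get(field) != baseline.get(field):
                print(f"{domain}: {field} differs "
                      f"({record.get(field)!r} vs {baseline.get(field)!r})")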

Possible Reason for Bulk Registering Lookalike Domains

While Automated Health Systems may have registered the domains as part of its typosquatting protection strategy, we can't discount the possibility that these could also be part of a typosquatting campaign. And so we dug deeper.

The Florida SMMC Program Online Portal

The Florida SMMC Program is an enhancement to the Florida Medicaid Program and comprises three components:

  • Florida Long-term Care Managed Care Program
  • Florida Managed Medical Assistance Program
  • Dental Program

Like the Florida Medicaid Program, it has an online portal where members can check their eligibility and enrollment status, enroll and update their medical plans, update their addresses, and request assistance. Members can log in using their username, email address, or phone number and nominated password.

The members' online accounts contain their medical records and other sensitive data that may be worth a significant amount when sold on the Dark Web. Getting hold of the members' usernames and passwords can also give threat actors access to the members' other online accounts.

What is interesting about the bulk registration timing is that the Florida SMMC Program is (coincidentally or not) launching a new member portal on 13 July 2020.

The notification banner lets members know that a new portal is in the works. They do not have to do anything come 13 July, but they won't realize this unless they click the link that says, "Click here to learn more."

If cybercriminals indeed registered the 45 lookalike domains, several members could fall victim to phishing. Threat actors could send time-sensitive emails that say something along the lines of "Your SMMC online account has been disabled" or "Click here to activate your new SMMC online account."

We cannot rule out the possibility that Automated Health Systems registered the Florida SMMC lookalike domains detected by Typosquatting Data Feed, despite the differences in WHOIS registration details. If that is not the case, however, detecting typosquatting domains as early as possible is crucial, especially in the healthcare industry.


circleid.com | 09-Jul-2020 22:07

Hundreds of Election-Related Domain Names Seen as the 2020 U.S. Election Nears

Even as the world continues to tackle the coronavirus pandemic, essential events just can't be delayed. The U.S. presidential election will still take place on 3 November 2020.

Although it is still months away, discussions are heating up. In parallel, as with other newsworthy events, dozens of election-related domain names are being detected.

Election-Related Domain Name Registration Trends

We started detecting U.S. election-related domain names on 2 June. That day, primaries were also held in Washington, D.C., and seven states, namely, Indiana, Maryland, Montana, New Mexico, Pennsylvania, Rhode Island, and South Dakota.

We tracked election-related typosquatting domain names within the period 2–13 June, particularly those containing the following strings:

  • "bide"
  • "trump"
  • "electio"
  • "presiden"

Within 12 days, we saw a total of 216 election-related domain names that appeared on the Domain Name System (DNS).
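
A tally like this is straightforward to reproduce from any newly-registered-domain feed. The sketch below assumes a simple CSV with date,domain rows (the file name and layout are hypothetical) and counts the domains containing any of the tracked strings per day.

    # Count newly registered domains containing tracked election strings, per day.
    # The CSV file name and "date,domain" layout are assumptions for illustration.
    import csv
    from collections import Counter

    STRINGS = ("bide", "trump", "electio", "presiden")
    daily_counts = Counter()

    with open("nrd_feed.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            if any(s in row["domain"].lower() for s in STRINGS):
                daily_counts[row["date"]] += 1

    for date, count in sorted(daily_counts.items()):
        print(date, count)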

Spike in Domain Name Registrations After a Big Election-Related Event

The chart above plots the number of domains that contain each string as well as the total. It shows that the number of election-related domain names peaked on the following dates:

  • 3 June: A day after the primaries in Washington D.C and seven states were held. A total of 30 domain names were detected.
  • 5–6 June: The Virgin Islands presidential caucuses were held. Twenty-five domain names were seen on each day.
  • 10 June: Primaries were held in Georgia and West Virginia a day before. Some 29 domain names were detected.

Other election-related events that could shape domain registration are the Kentucky and New York primaries slated for 23 June. Given the emerging trend, domain registrations could spike on or after that date. We saw the same thing happen with the coronavirus-themed domain names.

The Anatomy of "Biden" and "Trump" Domain Names

While the tallies of "Biden" and "Trump" typosquatting domains are close (73 and 87, respectively), the themes vary. "Biden" domain names, for instance, hint at who people may want his running mate to be. A few examples are:

  • bidenrice[.]org
  • bidenrice[.]website
  • biderice[.]org
  • bidendemings-us[.]com
  • bidendemings4us[.]com
  • bidendemings-usa[.]com
  • bidenriamondo[.]org
  • bidenriamondo[.]net
  • bidenriamondo[.]com
  • bidenharrisforpresident[.]net
  • bidenharrisforpresident[.]org
  • bidenharrisforpresident[.]com

Some domain names also hint at support for Biden coming from the Ukrainian-American community. We saw 24 domain names on that theme registered in just two days.

When run through a bulk WHOIS lookup, the WHOIS records of the Ukrainian-American domain names seemed to have the same registrant. All of them use the same privacy service, pointing to the address 96 Mowat Ave., Ontario, Canada.

On the other hand, typosquatting domain names that contain the string "trum" had slightly different themes. For one, only the Owens-Trump tandem seemed to be promoting a running mate, although the domains bear 2024 and 2028 tags:

  • owenstrump2024[.]org
  • owenstrump2028[.]com
  • owenstrump2028[.]org
  • owenstrump2024[.]com

Some domain names also appeared to show support for Trump, such as:

  • whytrumpiagreat[.]com
  • whyrrumpisgreat[.]com
  • whytrumpisgrear[.]com
  • armyfortrump[.]club
  • armyfortrump[.]live
  • armyfortrump[.]org
  • supporttrumpsleadership[.]com
  • supporttrumpsleadership[.]org
  • supporttrumpsleadership[.]info
  • liberalsfortrumpactioncommittee[.]info
  • liberalsfortrumpactioncommittee[.]org
  • liberalsfortrumpactioncommittee[.]com
  • electrumv[.]org
  • electrumo[.]org

Others also seemed to be against the incumbent president:

  • donaldtrumpisajoke[.]net
  • donaldtrumpisajoke[.]org
  • donaldtrumpisajoke[.]com
  • death2trump[.]golf
  • death2trump[.]org
  • death2trump[.]party
  • donaldtrumpvsthepeople[.]net
  • donaldtrumpvsthepeople[.]org
  • donaldtrumpvsthepeople[.]info
  • pucktrump[.]com
  • fuctrump[.]org
  • fucktrump[.]site

What Election-Related Typosquatting Domains Could Be Up To

It's a known fact that typosquatting domains can be used in nefarious activities such as phishing campaigns, scams, and malware attacks. So what kind of content could these domains possibly host?

We can get a glimpse of the domains without having to visit the websites using a screenshot capture tool.

The Biden-inspired domain names that promote running mates, for example, are mostly parked, with some hosting ads.

The same is true for domain names that express support for Trump, although some pages promise to have content soon.

Other screenshots show that most election-related domains follow the same patterns. They are either parked or under construction, save for a few that are already up and running.

The rise in election-related domain names reinforces the point that new registrations typically follow newsworthy events. While most of these domains may currently be parked or the object of speculative domain investments, they too could turn into phishing entities in the near future.


circleid.com | 09-Jul-2020 03:59

Exceedingly Close and Difficult UDRP Cases

The ordinary run of cybersquatting cases is neither "exceedingly close nor difficult." The quoted phrase comes from Harvest Dispensaries, Cultivations & Production Facilities, LLC v. Rebecca Nickerson / Rock City Harvest, FA2004001892080 (Forum June 26, 2020), one of those rare cases that actually was exceedingly close and difficult. For 90% of the docket (a percentage that has been creeping up since 2016), even when Respondent appears (which it mostly does not), there is neither a defense nor merit to Respondent's contentions. The UDRP has hoovered in cybersquatters by the tens of thousands.

Harvest Dispensaries is not one of those. The Panel explained that there are several "difficulties" in this case:

As a threshold matter, the Panel notes that the instant proceeding presents a complicated series of questions relating to the interconnected disputes between the parties and their affiliates… Although some areas of the analysis in this decision may be rendered moot by the parties' submissions or ancillary findings on the merits of this case [referring to an Arkansas case in which Complainant was not a named defendant and did not request relief with respect to the challenged domain name], the Panel still endeavors to provide a fulsome discussion of all factors considered in a typical UDRP proceeding.

There is, first, a collision between federal and state trademark rights. Ordinarily, of course, there is no issue as to which ultimately prevails in a court of law, but here, and for purposes of adjudication under the Uniform Domain Name Dispute Resolution Policy (UDRP), Complainant's federal registration predated the registrations of the domain names, yet it consists of a dictionary word, "Harvest," which is descriptive at best. Respondent plausibly contended that it adopted "Harvest" and obtained an Arkansas trademark registration without knowledge of Complainant's statutory rights.

Respondent offered documentary evidence that it began the process of entering the cannabis market intending to use the fictitious trade name "Harvest" and learned in January 2019 that it would receive a license from the Arkansas Alcoholic Beverage Control Division to operate a marijuana dispensary under the name HARVEST. This predated Complainant's formal telephone conference assertion that the use of "Harvest" would be a trademark infringement.

The case turned on these documentary facts, even though, as the Panel noted, "Respondent's ... actions are not free from suspicion" that it may have become aware of Complainant's mark earlier than it claims, an issue that would be drawn out in a court proceeding. Complainant's problem, again as noted by the Panel, and important for complainants to digest:

The Panel rejects Complainant's argument that the registration of these two domain names was nevertheless in bad faith because Respondent had constructive knowledge of Complainant's trademark registration. Although constructive knowledge may be relevant to certain issues under U.S. trademark law, see 15 U.S.C. § 1072, constructive knowledge is insufficient to support a finding of actual knowledge and bad faith under Policy ¶ 4(a)(iii).

Complainant's federal registration priority over Respondent's use of the term is unquestionable, but Respondent nevertheless prevails on the issue of rights or legitimate interests: "Panel finds reasonable evidence within the record that Respondent is commonly known by the HARVEST name under Policy ¶ 4(c)(ii) prior to learning of Complainant and its trademark rights." While this satisfies the tests for the UDRP, it does not foreclose a trademark and cybersquatting claim under the Lanham Act. (Disclosure: I was on the Harvest Panel.)

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP


circleid.com | 08-Jul-2020 18:45

Alphabet's Loon Goes Live With Its Commercial Internet Service in Kenya

Flying 20 km over rural Kenya, Loon, in partnership with Telkom Kenya, delivers Africa's first balloon-based internet service. (Photo: Loon)

Alphabet's Loon on Monday announced that its high-altitude balloons are now providing internet service in Kenya to subscribers of Telkom Kenya. This is the first application of balloon-powered internet in Africa and first-ever non-emergency use of Loon. Since its early tests, the company says it has succeeded in connecting over 35,000 unique users, delivering services such as OTT voice/video calling, streaming, and web connectivity.

Current state of coverage: The initial service in Kenya spans nearly 50,000 square kilometers across western and central parts of the country, including the areas of Iten, Eldoret, Baringo, Nakuru, Kakamega, Kisumu, Kisii, Bomet, Kericho, and Narok. To cover this area, Loon has deployed a fleet of around 35 separate flight vehicles that are in constant motion in the stratosphere above eastern Africa. "As we continue to add balloons to achieve this target fleet size in the coming weeks, service availability will become more consistent," says Alastair Westgarth, CEO of Loon.

"We don't think we can – or should – replace the ground and space-based technologies that exist today." Westgarth says terrestrial, stratospheric, and space-based technologies will all work together to serve different parts of the globe and use-cases. "The key will be coordinating these various solutions so they provide a seamless connection."


circleid.com | 07-Jul-2020 21:57

Evolving the Internet Through COVID-19 and Beyond

Co-authored by Jari Arkko, Alissa Cooper, Tommy Pauly, and Colin Perkins

As we approach four months since the WHO declared COVID-19 to be a pandemic, and with lockdowns and other restrictions continuing in much of the world, it is worth reflecting on how the Internet has coped with the changes in its use, and on what lessons we can learn from these for the future of the network.

The people and companies that build and operate the Internet are always planning for more growth in Internet traffic. But the levels of growth seen since the start of the global COVID-19 pandemic are nothing that anyone had in their plans. Many residential and mobile networks and Internet Exchange Points reported 20% traffic growth or more in a matter of weeks as social distancing sent in-person activities online around the world. While nearly all kinds of traffic, including web browsing, video streaming, and online gaming have seen significant increases, real-time voice and video have seen the most staggering growth: jumps of more than 200% in traffic and daily conferencing minutes together with 20-fold increases in users of conferencing platforms.

By and large, the Internet has withstood the brunt of these traffic surges. While users have experienced brief outages, occasional video quality reduction, and connectivity problems now and again, on the whole, the Internet is delivering as the go-to global communications system allowing people to stay connected, carry on with their daily lives from their homes, and coordinate responses to the pandemic.

These are impressive results for a technology designed 50 years ago. But the robustness of the Internet in the face of huge traffic surges and shifting usage patterns is no accident. It is the result of continuous improvements in technology, made possible by its underlying flexible design. Some recent proposals have suggested that the Internet's architecture and underlying technologies are not fit for purpose, or will not be able to evolve to accommodate changing usage patterns in coming years. The Internet's resiliency in the face of recent traffic surges is just the latest and most obvious illustration of why such arguments should be viewed skeptically. The Internet has a unique model of evolution that has served the world well, continues to accelerate, and is well-positioned to meet the challenges and opportunities that the future holds.

The Internet is evolving faster today than ever before

As requirements and demands on networks have changed, the Internet's ability to continually evolve and rise to new challenges has proved to be one of its greatest strengths. While many people still equate the Internet with "TCP/IP," the Internet has changed radically over the past several decades. We have seen a huge increase in scale, great improvements in performance and reliability, and major advances in security. Most remarkably, this evolution has been almost entirely seamless: users notice improvement in their online experience without any sense of the hard work that went into it behind the scenes.

Even from its early days, the Internet's fundamental structure has evolved. When the mappings of host names to addresses became impractical to store and use, the Domain Name System (DNS) was created. Then the original rigid class-based IP address space was made more flexible with Classless Inter-Domain Routing, which enabled greater scalability.

The pace at which major new innovations like these are integrated into the network has accelerated in recent years. For example, five years ago about 70% of web connections were not secured with encryption. They were vulnerable to observation by anyone who intercepted them. But a renewed focus on security and rapid changes that have made it easier to deploy and manage security certificates have accelerated encryption on the web to the point where just 20% or so of web connections are vulnerable today.

Security protocols are also being updated more rapidly to stay ahead of attacks and vulnerabilities. Transport Layer Security (TLS) is one of the foremost protocols used to encrypt application data on the Internet. The latest version, TLS 1.3, can cut connection setup time in half, and expands the amount of information that is protected during setup. The protocol was also carefully designed to ensure that it could be deployed on the Internet in the broadest way possible. After its finalization in 2018, there was more TLS 1.3 use in its first five months than TLS 1.2 saw in its first five years. Roughly one third of all traffic using TLS has upgraded from TLS 1.2 to TLS 1.3 in a period of 18 months.
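
For readers who want to see the shift for themselves, the short sketch below (Python 3.7 or later, built against OpenSSL 1.1.1 or later) opens a connection to a server and prints the negotiated TLS version; the host name is just a placeholder.

    # Print the TLS version negotiated with a server.
    # Requires Python 3.7+ with OpenSSL 1.1.1+ for TLS 1.3; the host is a placeholder.
    import socket
    import ssl

    HOST = "example.org"

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print(tls.version())  # e.g. "TLSv1.3" when both ends support it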

Even the basic transport protocols used on the Internet are evolving. For decades, there has been a desire to add features to transports that the original TCP protocol could not support: multiplexed streams, faster setup time, built-in security, greater data efficiency, and the ability to use multiple paths. QUIC is a new protocol that supports all of those features, and is carefully designed with deployability and protocol evolution in mind. Even prior to finalization, initial versions of QUIC have become predominant in traffic flows from YouTube and mobile Facebook applications. QUIC is the foundation for the newest version of HTTP, HTTP/3. This means that faster and more resilient connections can be provided to existing HTTP applications, while opening up new capabilities for future development.

These are just a handful of examples. We have seen the Internet evolve through massive technological shifts, from the rise of cellular data networks and mobile broadband to the explosion of voice, video, and gaming online. We have seen the creation of massively distributed content delivery networks and cloud computing, the integration of streaming and conversational multimedia on the web platform, and the connection of billions of constrained "Internet of Things" devices to the network. And although some Internet systems have been deployed for decades, the pace of technological advancement on the network continues to accelerate.

Keys to successful evolution

This evolvability is an inherent feature of the Internet's design, not a by-product. The key to successfully evolving the Internet has been to leverage its foundational design principles while incorporating decades of experience that teach us how to successfully upgrade a network composed of billions of active nodes all while it is fully operational — a process more colloquially known as "changing the engines while in flight."

The Internet was explicitly designed as a general-purpose network. It is not tailored to a particular application or generation of technology. This is the very property that has allowed it to work well as physical networks have evolved from modems to fiber or 5G and adapt to traffic shifts like the ones caused by the current pandemic. Optimizations for particular applications have frequently been contemplated. For example, "next generation networking" efforts throughout the decades have insisted on the need for built-in, fine-grained quality-of-service mechanisms in order to support real-time applications like voice, video, and augmented reality. But in practice, those applications are flourishing like never before by capitalizing on the general-purpose Internet, optimizations in application design, increased bandwidth, and the availability of different tiers of Internet service.

Modularity goes hand-in-hand with general-purpose design. Internet networks and applications are built from modular building blocks that software developers, architects, network operators, and infrastructure providers combine in numerous different ways to suit their own needs while interoperating with the rest of the network. This means that when it comes time to develop a new innovation, there are abundant existing software stacks, libraries, management tools, and engineering experiences to leverage directly. Tools like IP, DNS, MPLS, HTTP, RTP, and TLS have been re-used so many times that their common usage and extension models and software support are widely understood.

The Internet was also designed for global reach. Endpoints throughout the Internet are capable of reaching each other using common systems of addresses and names even if their local networks use vastly different underlying technologies. Introducing new or replacement addressing or naming schemes intended for global reach therefore requires either complex systems of gateways to bridge between existing Internet-connected systems and new networks or an incentive structure that would cause the majority of existing nodes to abandon the Internet and join the new network. Neither of these offers an obvious path to seamless global interoperability. And gateways would likely constrain future evolution across all layers of the stack.

We have seen struggles over incentives play out with the decades-long advance of IPv6 deployment, as well as with other protocol upgrade designs like DNSSEC and BGPsec. Experience with the development and deployment of these protocols has shown that baking deployment incentives into the design of a protocol itself is key to widespread deployment. Understanding which actors in the industry will be motivated to invest in upgrades and having the protocol design place the onus on those actors is critical.

The TLS 1.3 and QUIC examples highlighted above took these lessons to heart. Both protocols bind security upgrades together with performance improvements, knowing that Internet businesses will invest to achieve better performance and thereby improve security in the process. QUIC likewise allows application developers to deploy without having to rely on operating system vendors or network operators to apply updates, easing the path to widespread adoption.

Testing network innovations at scale in parallel with designing network protocols is also crucial. In the last five years, every major new Internet protocol design effort has been accompanied by the parallel development of multiple (sometimes a dozen or more) independent implementations. This creates extremely valuable feedback loops between the people designing the protocols and the people writing the code, so that bugs or issues found in implementations can lead to quick changes to the design, and design changes can be quickly reflected in implementations. Tests of early implementations at scale help to motivate involvement in the design process from a broader range of application developers, network operators, equipment vendors, and users.

Finally, the Internet uses a collaborative model of development: designs are the product of a community working together. This ensures that protocols serve the multitude of Internet-connected entities, rather than serving a limited set of interests. This model also helps to validate that updates to the network can and will find their way into production-quality systems. Many academic research efforts focused on future Internet designs have missed this component, causing their efforts to falter even with brilliant ideas.

Challenges and opportunities

The Internet faces many technical challenges today and new challenges will continue to arise in the future. At the same time, technological advancements and societal changes create opportunities for the Internet to continue to evolve and meet new needs as they arise.

Security is a multifaceted challenge that has been and continues to be a major area of evolution. Encryption of many kinds of Internet traffic is at an all-time high, yet work remains to mitigate unintentional data leaks and operationalize encryption in core infrastructure services, such as DNS. Strong protections and mitigations are needed against threats as diverse as commercial surveillance, denial-of-service attacks, and malware — all in the context of an Internet that is increasingly connecting devices that are constrained by computational power and limited software development budgets. These aspects must be addressed without intensifying industry consolidation. All of these challenges are increasingly the focus of Internet protocol designers.

Performance will also continue to require improvements in order to meet increasing demands. Protocol designers are only just beginning to sort through how they might leverage the performance gains from QUIC and HTTP/3 for numerous future applications. Scaling up deployment of mechanisms such as active queue management (AQM) for reducing latency, increasing throughput, and managing traffic queues will be needed to handle an ever-changing mix of traffic flows. Innovative approaches such as information-centric networking, network coding, moving computation into the network, establishing common architectures between data centers and edge networks, decentralised infrastructure and integration of quantum technology are the focus of ongoing exploration to respond to current and future performance requirements.

The kinds of networks and devices that benefit from global Internet connectivity or local IP networking (or both) will continue to diversify, ranging from industrial to vehicular to agricultural settings and beyond. Technologies such as deterministic networking, which seeks to provide latency, loss, and reliability guarantees, and new protocols explicitly designed to account for intermittent connectivity and high mobility will all be in the mix as information technology and operational technology continue to converge.

Conclusion

The Internet of 2020 is vastly different from the one where TCP/IP originated, even though variants of those original protocols continue to provide global connectivity. The combination of a general-purpose design, modularity, global reach, and a collaborative engineering model with lessons learned about incentives, implementation, and testing at scale have produced the Internet's winning formula for evolution.

The Internet's unique approach to evolution positions it well as a technology to meet new challenges and seize new opportunities. The central role of the Internet in society, only underlined by the COVID-19 crisis, continues to increase. We hope to never again experience a crisis that causes such disruption and suffering throughout the world, but we are optimistic that, crisis or not, the Internet will continue to evolve to better serve the needs of its users.

About the authors

Author affiliations are provided for identification and do not imply organizational endorsement.

Jari Arkko is a member of the Internet Architecture Board and a Senior Expert with Ericsson Research.

Alissa Cooper is the Internet Engineering Task Force Chair and a Fellow at Cisco Systems.

Tommy Pauly is a member of the Internet Architecture Board and an Engineer at Apple.

Colin Perkins is the Internet Research Task Force Chair and an Associate Professor at the University of Glasgow.

Written by Jari Arkko, A Member of the Internet Architecture Board, Senior Expert at Ericsson Research


circleid.com | 07-Jul-2020 21:35

RSS and Atom feeds and forum posts belong to their respective owners.