What is My User Agent & How to Check Your User Agent?


Find out what your user agent is revealing about you! Look up the details behind a user agent string, including operating system, browser version, and more.


What is My User Agent?

Your browser's user agent is:

CCBot/2.0 (https://commoncrawl.org/faq/)


Your User Agent String Lookup

Look up and parse your web browser's user agent string here. It shows your operating system, web browser version, and more.


Related: What Is Your Real Public IP Address?

You might wonder whether that really is your user agent string, and whether the browser (user agent) information shown is yours. Here is how to find your browser's user agent yourself.


How to Check Your User Agent?

  • Check user agent in Chrome

Type in “chrome://version” in the address bar.


  • Check user agent in Firefox

Type in “about:support” in the address bar.


  • Check user agent in Microsoft Edge

Type in “edge://version” in the address bar.
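
If you prefer to verify this outside the browser's own pages, a quick way is to run a tiny local web server that simply echoes back whatever User-Agent header it receives. The sketch below uses only Python's standard library; the port number and class name are arbitrary choices for illustration.

# A minimal local echo server: run it, then open http://localhost:8000 in any
# browser, and the page will display the exact User-Agent string that browser sent.
from http.server import BaseHTTPRequestHandler, HTTPServer

class UserAgentEcho(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "(no User-Agent header sent)")
        body = ua.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), UserAgentEcho).serve_forever()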


User Agent 101: The Ultimate Guide to Web Browser UA Strings

Do you want to learn about user agents in web networking? This page was written to educate you on the concept. Here you will learn what a user agent is, what it is used for, the problems associated with its use, and more.


When you use a web browser as an ordinary Internet user, the inner workings of the Internet are abstracted away and made easy for you. There is a lot going on behind the scenes, with your web browser carrying out many tasks on your behalf.

It might interest you to know that when you send a web request to a server on the Internet, your browser identifies itself and provides other technical information that the web server needs to give a better response. Without the browser identifying itself, web servers can only return a generic response, which might not render well depending on the web browser used.

And no, the concept of a user agent is not tied only to web browsers. I use web browsers as the example because they are the user agents most familiar to Internet users; crawlers, web scrapers, Internet-connected game consoles, smart TVs, and even some IoT devices are user agents too. After reading this article in full, you will have a solid understanding of user agents.


What is a User Agent?

A user agent is any software that interacts with web servers on behalf of Internet users. It can also be seen as a bridge between you and the Internet.

Any software that sends web requests to web servers is a user agent, whether it works independently of human interaction, as automation tools and bots do, or accepts direct commands from humans, as web browsers and similar software do.

For instance, if you want to access content online, you use a web browser, which serves as the user agent that retrieves and renders the content and makes it possible for you to interact with it.


In a client-server network protocol, the client side is the user agent. It might interest you to know that your email reader is a mail user agent.

User agents do not stop there – your gaming console can be a user agent, and so can your smart TV and other Internet-enabled devices. In the Hypertext Transfer Protocol (HTTP), clients (user agents) are identified using the User-Agent header.


User Agent Identification


As stated earlier, when a client sends a web request to a web server, it sends its identity along with the request, providing the server with information about itself, including but not limited to its name, application type, operating system, software version, software vendor, and rendering engine.

For web crawlers, web scrapers, and other automation bots, the convention is to include the word “bot” together with a URL or some other contact detail that can be used to reach their operators. All of these details are put together in a string known as the user agent string, which is bundled into the User-Agent header of an HTTP request.

For example, a standard Google Chrome browser running on Windows will send the string below as its user agent string.

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36

On an iPhone, Chrome identifies itself with the string below.

Mozilla/5.0 (iPhone; CPU iPhone OS 14_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/87.0.4280.77 Mobile/15E148 Safari/604.1

Every other browser, web bot, and client application has its own string that it uses to identify itself. Looking at the strings above, you can see that they carry the name of the browser, its OS and platform, and version numbers, among other details.

One thing to note is that while there is a naming convention, nothing forces anyone to follow it, so some user agent strings contain just the name of the application or use an arbitrary or fake name. Some bot developers go as far as using the user agent of a popular browser in order to hide their activities.
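
To see how the pieces of such a string can be pulled apart, here is a rough, best-effort parse of the Chrome-on-Windows string shown above. It is only a sketch: production-grade parsers rely on large, regularly updated rule databases rather than a couple of regular expressions.

import re

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/87.0.4280.88 Safari/537.36")

# The first parenthesised group usually carries platform and OS details.
platform = re.search(r"\(([^)]*)\)", ua)

# Product tokens such as "Chrome/87.0.4280.88" follow a "name/version" pattern.
products = re.findall(r"([A-Za-z]+)/([\d.]+)", ua)

print("Platform:", platform.group(1) if platform else "unknown")
for name, version in products:
    print(name, version)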

Bad Bot 101: What is it & How to Detect and Block Bad Bots?

Uses of User Agents


You might be wondering why client software identifies itself and what web servers need that information for. It turns out that user agent information has two major uses: content negotiation, and granting or blocking access.

  • Content Negotiation

Many web pages are served in different variants depending on the capabilities of the requesting device. For instance, the layout of the Google search results page varies depending on the browser or platform you use to access it. By looking at the user agent string, Google can serve you the version best suited to your browser and device.

Many other sites on the Internet use the user agent to provide a better user experience. Without one, at best you are served the generic version of a page, which may or may not render well in your browser. Bot developers take advantage of this to sidestep JavaScript-heavy sites, sending mobile browser user agents so that web servers return a lighter, less JavaScript-heavy version of a page.

  • Access Negotiation and Blocks

Perhaps the most popular use of the user-agent string is determining whether a particular piece of client software has the right to access certain content. Web servers use the user-agent string in the HTTP request header to exclude crawlers, scrapers, and other bots from accessing their platform.

Many popular websites frown on bot traffic and will deny access to user agents other than those of popular browsers. While they enforce this internally, they can also signal their rules to web crawlers via the robots.txt file and expect them to follow the directives there. Generally, web servers only want to allow traffic originating from real users and tend to block traffic from automated sources, unless there is some benefit for them. A minimal server-side sketch of both uses follows below.
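
To make the server side of this concrete, below is a minimal sketch, assuming the Flask framework and a couple of made-up keyword lists, of how a site might branch on the User-Agent header for both content negotiation and access blocking. Real sites use far more elaborate rules.

from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical keyword lists, for illustration only.
BLOCKED_KEYWORDS = ("curl", "python-requests", "scrapy")
MOBILE_KEYWORDS = ("mobile", "android", "iphone")

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "").lower()

    # Access blocking: refuse clients whose user agent looks like a bot or script.
    if any(word in ua for word in BLOCKED_KEYWORDS):
        abort(403)

    # Content negotiation: serve a lighter page to mobile user agents.
    if any(word in ua for word in MOBILE_KEYWORDS):
        return "mobile version of the page"
    return "desktop version of the page"

if __name__ == "__main__":
    app.run()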


The Robots.txt File and User-Agent

The Robots Exclusion Standard, better known as the robots.txt file, is a communication standard used by web services to pass specific directives to automation bots such as crawlers and scrapers. It informs a web bot whether it is allowed to access content on the site's pages. Some websites do not have a robots.txt file at all; others use it to provide extensive directives.

With a robots.txt file, you can give directives to specific bots or to all bots. Common directives include granting or denying access to all bots or to specific bots, setting a crawl rate, and listing pages you do not want bots to access.


Aside from the fact that not all web services permit automated access such as web scraping, some run on limited infrastructure and therefore provide directives on how their site should be accessed in an automated manner, to avoid hurting server performance. One unfortunate thing about robots.txt files is that many bot developers and operators do not respect them.

In fairness, web crawlers and other bots are supposed to parse the robots.txt file to determine whether their user agent is allowed access. In practice, however, many bot developers and operators completely disregard it. And the worst part is that the value of the User-Agent header is easy to manipulate.
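
For bots that do want to behave, Python ships a robots.txt parser in its standard library. The sketch below parses a made-up robots.txt policy and checks whether two hypothetical user agents may fetch particular URLs; the site and bot names are invented for illustration.

from urllib.robotparser import RobotFileParser

# A made-up robots.txt policy: every bot may crawl except under /private/,
# and a bot named "BadBot" is banned from the whole site.
robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: BadBot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# A well-behaved crawler checks its own user agent before fetching a URL.
print(parser.can_fetch("MyCrawler/1.0", "https://example.com/private/page"))  # False
print(parser.can_fetch("MyCrawler/1.0", "https://example.com/public/page"))   # True
print(parser.can_fetch("BadBot", "https://example.com/anything"))             # False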


Most Common User Agent Strings

In this part of the article, we list the user agent strings of the most popular user agents. As you will discover later in the article, you can change your software's user agent to that of another application and get the web services you visit to treat it as that application.

Top 1000 User Agents

When it comes to the most popular user agents, we will look at three categories: browsers, search crawlers, and others.

Popular Browser User Agent Strings

Even for the same browser, the string varies across operating systems, so there are a great many browser user-agent strings. For this reason, we will focus on only a few.


  • Standard Chrome User-Agent

Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36
  • Android Chrome User-Agent

Mozilla/5.0 (Linux; Android 10) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.86 Mobile Safari/537.36
  • iPhone Chrome User-Agent

Mozilla/5.0 (iPhone; CPU iPhone OS 14_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/87.0.4280.77 Mobile/15E148 Safari/604.1

  • Firefox Windows User-Agent

Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0
  • Android Firefox User-Agent

Mozilla/5.0 (Android 11; Mobile; rv:68.0) Gecko/68.0 Firefox/83.0
  • iPhone Firefox User-Agent

Mozilla/5.0 (iPhone; CPU iPhone OS 11_0_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/29.0 Mobile/15E148 Safari/605.1.15

  • Standard Safari User-Agent

Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.1 Safari/605.1.15
  • iOS Safari User-Agent

Mozilla/5.0 (iPhone; CPU iPhone OS 14_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1

  • Standard Edge User-Agent

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36 Edg/87.0.664.55
  • Android Edge User-Agent

Mozilla/5.0 (Linux; Android 10; HD1913) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.86 Mobile Safari/537.36 EdgA/45.11.2.5102
  • iOS Edge User-Agent

Mozilla/5.0 (iPhone; CPU iPhone OS 14_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 EdgiOS/45.11.1 Mobile/15E148 Safari/605.1.15

Just like browsers, each search engine runs a number of crawlers, so there are many crawler user agent strings out there. Let's zero in on one per search engine.

  • Google User-Agent:

Googlebot/2.1 (+http://www.googlebot.com/bot.html)

  • Bing User-Agent:

Mozilla/5.0 (compatible; bingbot/2.0 +http://www.bing.com/bingbot.htm)
  • Baidu User-Agent

Baiduspider+(+http://www.baidu.com/search/spider.htm)
  • Yahoo User-Agent

Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)

  • DuckDuckGo User-Agent:

DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)
  • Yandex User-Agent

Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)

Aside from the user agents of browsers and search engine crawlers, there are a good number of other popular user agents out there. Let's take a look at some of them.

  • Alexa User-Agent

ia_archiver (+http://www.alexa.com/site/help/webmasters; [email protected])
  • Facebook External Hit User-Agent

facebookexternalhit/1.0 (+http://www.facebook.com/externalhit_uatext.php)
  • Google AdSense User Agent

Mediapartners-Google

Download User Agent List

We used to run an Android scraping project that needed a lot of user-agent string data, and I would like to share it here. Note that all of those user agents come from Android devices; I have sorted them by mobile brand, browser, country, and version.


User Agent Spoofing

The user agent string is set solely by the developer of the client application. While web browsers, beneficial crawlers, and other good clients follow the naming convention, many bot developers do not. In fact, it is common practice for bot developers and operators to use the user agent string of a popular web browser in a bid to remain unnoticed and evade anti-bot systems.

The practice of using an arbitrary user agent string, such as that of a popular browser like Chrome, is known as user agent spoofing. You can do it easily in either of the following ways:

  • Use a User-Agent Switcher extension


  • Spoof the user agent manually

You can also change the user agent through Chrome's developer tools, without any extensions.


Most bot developers will use either the user agent of Chrome or Googlebot.
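
In a script, the same spoofing is nothing more than overriding one request header. Here is a minimal sketch, assuming the third-party requests library and using httpbin.org (a public service that echoes back the headers it receives) to confirm what was actually sent.

import requests

# The script claims to be desktop Chrome instead of "python-requests/x.y.z".
CHROME_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/87.0.4280.88 Safari/537.36")

response = requests.get(
    "https://httpbin.org/user-agent",
    headers={"User-Agent": CHROME_UA},
    timeout=10,
)
print(response.json())  # {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; ...'}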


User agent spoofing is not ethical, as it leaves web administrators no trace with which to contact you should they need to.

However, because most web services do not allow bots access to their service, bot developers have developed the habit of spoofing user agents. Blocking by user agent can be effective against good bots, but in the case of bad bots, you can be sure they will leave no trace, thanks to a faked user agent string.




The Problem with User Agent Strings

The fact that the user agent string is modifiable by clients makes it unreliable. Web administrators cannot rely on it to protect their servers against bot traffic, and bot developers can combine a spoofed user agent with other technology to deceive web servers.

But that is not where the problems with user-agent strings stop. Another major issue is privacy, as the user agent string has turned out to be somewhat “fingerprintable.” For those unfamiliar with the term, some web services track users based on their browser information; this is known as browser fingerprinting, and the user agent string is one of its components.


As more and more people become conscious of their privacy online, there is pressure to either drop the user agent string or make it less fingerprintable. Interestingly, browser vendors are working on a new system that will provide a means of client identification without such “fingerprintable” features.


The Future of the User Agent String

The web is moving toward a time when the user agent string will be history. In other words, the user agent string does not have a place in the future, as technologies to replace it are actively being developed. It has been called messy, unreliable, and a source of fingerprinting, and a better alternative has to take its place.

Google is championing the development of Client Hints, the technology intended to replace the user agent string. Google is taking steps to phase out the user-agent string in Chrome, and other browser vendors have shown interest.

Currently, Google is unifying the user-agent string across Chrome browsers so that all you can tell from it is that the client is Chrome running on a desktop or mobile device.


From the above, you can see that the future of the user agent string is Client Hints. With Client Hints, the web server asks the browser to return specific pieces of information about itself, without revealing so much that it can be used for tracking. Web servers can then use those hints for content negotiation, which is the main use of the user agent string today.
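
As a rough illustration of the mechanics, the sketch below (assuming the Flask framework) reads the hint headers proposed by the User-Agent Client Hints specification (Sec-CH-UA, Sec-CH-UA-Mobile, Sec-CH-UA-Platform) and uses the Accept-CH response header to ask supporting browsers to keep sending the platform hint. Which hints a browser sends by default varies, so treat this as a sketch rather than a reference implementation.

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    # Client hint headers sent by supporting browsers (values may be absent).
    brand = request.headers.get("Sec-CH-UA", "unknown")
    mobile = request.headers.get("Sec-CH-UA-Mobile", "unknown")
    platform = request.headers.get("Sec-CH-UA-Platform", "not provided")

    resp = make_response(f"brand={brand}; mobile={mobile}; platform={platform}")
    # Ask the browser to include the platform hint on subsequent requests.
    resp.headers["Accept-CH"] = "Sec-CH-UA-Platform"
    return resp

if __name__ == "__main__":
    app.run()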

Conclusion

There is no doubt that the user agent string has its place in content negotiation and in identifying client applications for access-control decisions. However, it has been called messy and a “fingerprintable” detail that throws privacy out of the window.

For this reason, a better alternative is being sought, with Client Hints as the likely candidate. For now, though, the user agent string is still in use, and you need to account for it in the scheme of things.

