Choosing Between Python Requests and urllib Without Guessing
Python gives you two obvious ways to make an HTTP request. One is requests, the
third-party library most people reach for once they have seen how pleasant it is to use. The other is urllib,
which ships with Python and asks you to do more of the work yourself. Both can fetch a URL.
Both can talk to an API. But they do not ask the same things of the programmer, and that
difference matters more than it seems at first.
What makes this comparison useful for a beginner is not memorizing which one is more popular. It is asking a simpler question. What job is the code actually trying to do? Is the goal to make one quick request with no external dependencies, or is the goal to write HTTP code that stays readable once headers, JSON, sessions, and error handling enter the picture? Once that question is clear, the comparison becomes much less vague.
The Choice Usually Starts With Friction
Most Python code begins with the shortest thing that works. That is normal. It is the same
reason beginners reach for a print() statement while debugging. It is immediate.
It gives feedback. It does not require structure yet.
print("Making request now")
A print() statement is useful for a moment. The problem is that it writes to the
console and eventually becomes clutter. The cleanup is easy: remove it when you are done.
Choosing between requests and urllib starts in a similar place. At
first, either one can get a response back. The real difference appears later, when the code has
to remain understandable.
What Each Tool Actually Is
requests is a third-party library built specifically to make HTTP work feel
cleaner. It is not part of the standard library, so you install it once and then import it like
any other package.
pip install requests
urllib is different. It is already included with Python. That means there is no
install step, no extra dependency, and no package management question to answer before the code
runs. That alone makes it useful in restricted environments or small scripts where zero external
dependencies matter more than comfort.
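As a quick illustration of that trade (not something either library requires), a script can prefer requests when it is installed and fall back to urllib otherwise. This is a minimal sketch, and fetch_text is a hypothetical helper name, not part of either library:

```python
# Prefer requests when it is available, fall back to the standard library.
try:
    import requests  # third-party; may be missing in restricted environments
    HAVE_REQUESTS = True
except ImportError:
    HAVE_REQUESTS = False

from urllib.request import urlopen  # always ships with Python


def fetch_text(url, timeout=5):
    """Return the response body as text, using whichever library is available."""
    if HAVE_REQUESTS:
        return requests.get(url, timeout=timeout).text
    with urlopen(url, timeout=timeout) as response:
        return response.read().decode("utf-8")
```

The point is not that you should write dual-path code routinely; it is that the zero-dependency fallback is always there.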
A Basic GET Request Shows the Tone Difference
The easiest comparison is a simple GET request. Both tools can do it, but they do not read the same way.
# requests
import requests
response = requests.get("https://httpbin.org/get", timeout=5)
print(response.status_code)
print(response.text)
# urllib
from urllib.request import urlopen
with urlopen("https://httpbin.org/get", timeout=5) as response:
    print(response.status)
    print(response.read().decode("utf-8"))
The function performing the action in the first version is requests.get(). In the second,
it is urlopen() from urllib.request. The problem is the same in both
cases: fetch a URL and read the response. The return values are where the libraries begin to differ.
requests gives you a decoded text response directly. urllib gives you
raw bytes, which means you have to decode them yourself.
That is not a dramatic difference by itself. But it establishes the broader pattern early.
requests tends to do more of the routine HTTP cleanup for you. urllib
tends to expose more of the underlying mechanics directly.
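The decoding step urllib asks of you has one subtlety worth knowing: a server can declare its charset in the Content-Type header, and a urllib response exposes that through headers.get_content_charset(). A small sketch, demonstrated on a bare email.message.Message (the type urllib uses for response headers) so it runs without a network call; pick_charset is a hypothetical helper name:

```python
from email.message import Message


def pick_charset(headers, default="utf-8"):
    """Read the charset declared in Content-Type, falling back to UTF-8.

    A urllib response's .headers attribute is an email.message.Message,
    so this works on a live response as well.
    """
    return headers.get_content_charset() or default


# Demonstration without a network call:
headers = Message()
headers["Content-Type"] = "text/html; charset=iso-8859-1"
print(pick_charset(headers))  # → iso-8859-1
```

Hard-coding utf-8, as the earlier examples do, is usually fine for JSON APIs; this helper matters when you fetch arbitrary pages.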
JSON Makes the Difference Clearer
The gap becomes more obvious when JSON enters the picture. Sending JSON and receiving JSON are common tasks in modern Python code, especially when working with APIs.
# requests
import requests
data = {"username": "alice", "score": 42}
response = requests.post(
    "https://httpbin.org/post",
    json=data,
    timeout=5,
)
print(response.json())
# urllib
import json
from urllib.request import Request, urlopen
data = json.dumps({"username": "alice", "score": 42}).encode("utf-8")
request = Request(
    "https://httpbin.org/post",
    data=data,
    headers={"Content-Type": "application/json"},
)
with urlopen(request, timeout=5) as response:
    result = json.loads(response.read().decode("utf-8"))
    print(result)
The problem is still straightforward: send structured data and read structured data back. The
constraint is that urllib makes you handle serialization, encoding, and header
setup yourself. requests collapses more of that into the json=
argument and the .json() response method.
This is where beginners usually stop feeling the comparison as theory and start feeling it as
effort. The more normal API work you do, the more requests reduces boilerplate.
Headers Are Possible in Both, But One Feels Lighter
Custom headers are another common requirement. Authentication tokens, content negotiation, and user-agent strings all live here.
# requests
import requests
headers = {"Authorization": "Bearer my_token"}
response = requests.get(
    "https://httpbin.org/headers",
    headers=headers,
    timeout=5,
)
# urllib
from urllib.request import Request, urlopen
request = Request(
    "https://httpbin.org/headers",
    headers={"Authorization": "Bearer my_token"},
)
with urlopen(request, timeout=5) as response:
    print(response.read().decode("utf-8"))
This is one of the places where the comparison is less dramatic. Both libraries can send
headers cleanly enough. But the larger difference remains. In requests, this feels
like one more ordinary option. In urllib, it feels like one more thing you are
assembling by hand.
Error Handling Is Different in a Way That Matters
One of the more important differences is how the libraries treat HTTP error responses.
requests and urllib do not fail in the same way by default.
# requests
import requests
response = requests.get("https://httpbin.org/status/404", timeout=5)
response.raise_for_status()
# urllib
from urllib.error import HTTPError, URLError
from urllib.request import urlopen
try:
    with urlopen("https://httpbin.org/status/404", timeout=5) as response:
        print(response.read())
except HTTPError as exc:
    print(f"HTTP error: {exc.code} {exc.reason}")
except URLError as exc:
    print(f"Connection error: {exc.reason}")
The difference is important. requests never raises an exception for a 4xx or 5xx
response on its own; you opt in by calling raise_for_status().
urllib raises HTTPError automatically for those responses.
This is not a case of one library being correct and the other being wrong. It is a behavioral difference you need to know when switching between them. A beginner who assumes they fail the same way will eventually be surprised.
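To make the requests side symmetrical with the urllib example above, wrap the call and the opt-in raise_for_status() in a try block. requests.HTTPError is a subclass of requests.RequestException, so the more specific handler comes first; fetch_or_report is a hypothetical helper name:

```python
import requests


def fetch_or_report(url, timeout=5):
    """Fetch a URL, converting HTTP and connection failures into messages."""
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()  # opt in to exceptions for 4xx/5xx
        return response.text
    except requests.HTTPError as exc:
        # Raised by raise_for_status(); the failed response is attached.
        return f"HTTP error: {exc.response.status_code}"
    except requests.RequestException as exc:
        # Base class for timeouts, DNS failures, and other transport errors.
        return f"Connection error: {exc}"
```

With this shape, both libraries end up with the same two failure categories: an HTTP error response versus no response at all.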
Sessions Are Where Requests Starts Pulling Away
The moment several requests need to behave like one ongoing conversation, requests becomes much easier to live with.
This is one of the strongest dividing lines between the two tools. If you are making repeated
calls to the same API and want to persist headers, cookies, or connection settings,
requests gives you a built-in Session object.
import requests
session = requests.Session()
session.headers.update({"Authorization": "Bearer my_token"})
response_one = session.get("https://httpbin.org/get", timeout=5)
response_two = session.get("https://httpbin.org/headers", timeout=5)
That makes repeated requests feel coherent. One session object can carry shared behavior across
many calls. urllib does not give you an equivalent abstraction out of the box. If
you want that kind of behavior there, you have to build the structure yourself.
That does not make urllib broken. It just means the standard library version asks
you to supply more of the organization when the task stops being tiny.
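If you do need session-like behavior with only the standard library, the usual building block is urllib.request.build_opener(), which can carry a cookie jar and default headers across calls. A minimal sketch of that do-it-yourself approach:

```python
import http.cookiejar
import urllib.request

# A cookie jar plus an opener is a rough analogue of requests.Session.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)

# addheaders is a list of (name, value) pairs sent with every request
# made through this opener.
opener.addheaders = [("Authorization", "Bearer my_token")]

# Each call through the opener now shares the jar and the headers:
# with opener.open("https://httpbin.org/get", timeout=5) as response:
#     body = response.read().decode("utf-8")
```

It works, but notice how much of the "session" is something you assembled rather than something handed to you.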
The Real Trade Is Convenience Versus Dependencies
A lot of comparisons like this become muddy because they try to turn the decision into a matter
of taste. It is usually less vague than that. requests is more convenient for most
real HTTP work. urllib is more attractive when external dependencies are not
allowed or are not worth adding.
That is the actual trade. Not elegance versus power. Not beginner versus expert. Convenience versus zero-dependency availability.
When Requests Is Usually the Right Choice
requests is usually the better choice when the code is doing more than one-off
fetching. If you are sending JSON, reading JSON, reusing headers, maintaining sessions, or just
trying to keep HTTP code readable, the library earns its place very quickly.
The library offers a cleaner public interface for ordinary HTTP work, and HTTP stops being
“ordinary” the moment it grows repetitive. requests is built for that repetition.
When urllib Is the Right Tool Anyway
urllib becomes the right answer when external packages are off the table or when
the script is small enough that adding a dependency would be more work than writing the extra
boilerplate. A standard-library-only environment is a real constraint, not a theoretical one.
There is also educational value in using urllib at least enough to understand what
lower-level HTTP handling feels like in Python. It makes the conveniences in
requests easier to appreciate honestly.
This Post Should Not Compete With the Requests Series
It helps to keep the job of this article narrow. This is not the post about retries. It is not
the post about sessions as a pattern. It is not the post about rate limits or reusable API
client design. Its job is simpler. It helps the reader answer one question clearly: should this
project use requests or urllib?
That narrower role keeps it from overlapping too much with the broader API posts. One article helps the reader choose a tool. The others help the reader use that tool well.
What a Beginner Should Keep
The cleanest lesson is this: use requests when you want readable, higher-level
HTTP code and can install a dependency. Use urllib when you need standard-library
only code or deliberately want the lower-level approach.
Both can make HTTP requests. The difference is how much routine work they ask you to carry yourself. For most projects, that difference grows quickly enough that the decision stops being subtle.
No Neat Bow
There is no grand philosophy hidden in this comparison. One tool is friendlier for most real
application work. The other is always present and asks less of the environment. For a beginner,
that is enough truth to move forward with: if the code needs to stay comfortable as HTTP logic
grows, reach for requests. If the environment forbids extra packages,
urllib is there, and it can still do the job.
Further Reading
If you choose requests and want the broader practical article, read Singleton Sessions, Retries, and Rate Limits in Python Requests.
If you want the narrower article on URL hygiene before building requests, read Cleaning a Base URL in Python Before It Turns Into a Bug.
If you want the companion article on building a reusable client wrapper, read Building a Small API Client on Top of Python Requests.