Building a Small API Client on Top of Python Requests
A small client gives repeated HTTP actions one home instead of scattering request logic across the whole codebase.
The object owns the base URL, the reusable session, and the request methods that belong to that relationship.
This is not about bigger abstraction. It is about reducing repetition and making API code easier to read.
A lot of beginner Python code that talks to APIs starts the same way. You import requests, send a get() request, inspect the response, and move on.
That is a perfectly good beginning. The trouble starts when the code grows. Soon you are making GET, POST, PUT, and DELETE calls in different parts of the program, each one carrying slightly different setup, slightly different URLs, and slightly different habits. The requests still work, but the structure starts to blur.
What helps here is not a bigger abstraction. It is a narrower one. Instead of thinking about “API architecture” in broad terms, ask a simpler question: if one session already represents one relationship with an API, should that same object also expose the request methods that belong to that relationship? For a beginner, that is the point where a session wrapper starts becoming a small client.
The First Version Usually Repeats Itself
The most direct way to talk to an API is to call requests functions one by one.
That works, but it also spreads request behavior across the codebase.
import requests
response = requests.get("https://api.example.com/users", timeout=10)
print(response.status_code)
response = requests.post(
    "https://api.example.com/users",
    json={"name": "Raell"},
    timeout=10,
)
print(response.status_code)
There is nothing wrong with this when the script is small. The code doing the work is the requests module itself. The problem is repetition: the same base URL, timeout rules, headers, or retry behavior must be remembered in multiple places. The fix is familiar: pull the shared behavior into one place so that the rest of the code stops repeating itself.
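Even before any class enters the picture, pulling the repeated values into module-level names helps. The sketch below is illustrative: BASE_URL, DEFAULT_TIMEOUT, and api_url are names invented for this example, not anything provided by requests.

```python
# A minimal sketch of pulling shared request settings into one place.
# BASE_URL, DEFAULT_TIMEOUT, and api_url are illustrative names only.
BASE_URL = "https://api.example.com"
DEFAULT_TIMEOUT = 10


def api_url(path):
    """Build a full URL from the shared base and the changing path."""
    return f"{BASE_URL}/{path.lstrip('/')}"


# Call sites now repeat less:
#   requests.get(api_url("/users"), timeout=DEFAULT_TIMEOUT)
#   requests.post(api_url("/users"), json={"name": "Raell"}, timeout=DEFAULT_TIMEOUT)
print(api_url("/users"))  # https://api.example.com/users
```

This is not yet a client, but it already gives the repeated pieces a single home.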
One Session Is Already a Step Toward Structure
A session gives the code a steadier center. Instead of treating every request as a disconnected event, the program starts treating several requests as part of one ongoing interaction.
import requests
session = requests.Session()
response = session.get("https://api.example.com/users", timeout=10)
print(response.status_code)
That already improves the shape of the code. But for a beginner, the next useful step is not merely “use a session.” It is “give the session a cleaner interface for the work it keeps doing.”
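One concrete way a session already offers a cleaner interface is through shared defaults. The sketch below assumes nothing beyond requests itself; the header values are examples only.

```python
import requests

# A session can carry shared settings so each call site stops repeating them.
# The header values here are only examples.
session = requests.Session()
session.headers.update({
    "Accept": "application/json",
    "User-Agent": "example-client/1.0",
})

# Every request made through this session now sends those headers
# automatically, without each call site restating them.
print(session.headers["Accept"])  # application/json
```

The session holds the shared behavior; the next step is to give it a clearer surface.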
Why a Small Client Helps
A small API client is useful when the code keeps making the same kinds of requests to the same service and needs one place to express how those requests should behave.
This is the point where a wrapper becomes reasonable. Not because wrappers are impressive, but because repeated GET, POST, PUT, and DELETE calls start asking for a home. A client object gives those methods one owner.
The goal is modest. Keep the session in one place. Keep the base URL in one place. Let the object expose the request methods that belong to that session.
A Simple Session Manager With Request Methods
import requests


class SessionManager:
    # One shared instance per normalized base URL.
    _instances = {}

    def __new__(cls, base_url):
        # Normalize before using the URL as a cache key, so that
        # "api.example.com", "https://api.example.com", and
        # "https://api.example.com/" all share one instance.
        if not base_url.startswith(("http://", "https://")):
            base_url = f"https://{base_url}"
        base_url = base_url.rstrip("/")
        if base_url not in cls._instances:
            instance = super().__new__(cls)
            instance.base_url = base_url
            instance.session = requests.Session()
            cls._instances[base_url] = instance
        return cls._instances[base_url]

    def _url(self, path):
        # Join the stored base URL with the part of the address that changes.
        return f"{self.base_url}/{path.lstrip('/')}"

    def get(self, path, **kwargs):
        return self.session.get(self._url(path), **kwargs)

    def post(self, path, data=None, json=None, **kwargs):
        return self.session.post(self._url(path), data=data, json=json, **kwargs)

    def put(self, path, data=None, json=None, **kwargs):
        return self.session.put(self._url(path), data=data, json=json, **kwargs)

    def delete(self, path, **kwargs):
        return self.session.delete(self._url(path), **kwargs)
The class keeps one session per base URL and exposes the request methods that belong to that session. The problem it solves is scattered request logic: repeated direct calls become harder to keep consistent as the code grows.
Why the Base URL Belongs Inside the Object
Beginners often pass full URLs into every request call. That works, but it quietly repeats the same information over and over. A cleaner approach is to let the client own the base URL and let each method accept only the path that changes.
client = SessionManager("api.example.com")
response = client.get("/users", timeout=10)
print(response.status_code)
response = client.post(
    "/users",
    json={"name": "Raell"},
    timeout=10,
)
print(response.status_code)
That is easier to read because the repeated part of the address is no longer cluttering every call site. The object already knows where it is talking.
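The joining rule is small enough to check on its own. The helper below mirrors the joining rule the client's methods use; the function name join_url is just for illustration.

```python
def join_url(base_url, path):
    """Join a base URL and a path without doubling or dropping slashes.

    This mirrors the joining rule used inside the client's methods;
    the function name itself is illustrative.
    """
    return f"{base_url.rstrip('/')}/{path.lstrip('/')}"


# All of these produce the same address:
print(join_url("https://api.example.com", "/users"))   # https://api.example.com/users
print(join_url("https://api.example.com/", "users"))   # https://api.example.com/users
print(join_url("https://api.example.com/", "/users"))  # https://api.example.com/users
```

Stripping both sides before joining means call sites never need to care whether a path starts with a slash.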
This Is Not Really About HTTPS Alone
It is worth being precise here. Prefixing a URL with https:// is useful, but that alone is not enough to justify a whole post. The more meaningful lesson is that once your code is already managing a session, it makes sense for that same object to expose the request methods that belong to that secure connection.
In other words, the real improvement is not “now the code can do HTTPS.” The requests library could already do that. The real improvement is that the code now has one small client object that owns its base URL, owns its session, and presents a clearer interface for common request types.
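The scheme handling can also be looked at in isolation. The helper below mirrors the normalization inside SessionManager.__new__; normalize_base_url is an illustrative name, not a requests API.

```python
def normalize_base_url(base_url):
    """Default to https:// when no scheme is given; keep an explicit scheme.

    This mirrors the check in SessionManager.__new__; the function
    name is illustrative.
    """
    if not base_url.startswith(("http://", "https://")):
        base_url = f"https://{base_url}"
    return base_url.rstrip("/")


print(normalize_base_url("api.example.com"))          # https://api.example.com
print(normalize_base_url("http://api.example.com/"))  # http://api.example.com
```

Note that an explicit http:// is respected; the https default only applies when no scheme was given at all.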
How This Keeps the Code Cleaner
Once the methods are attached to the client, the rest of the program stops needing to remember as much. It no longer has to keep rebuilding URLs. It no longer has to decide ad hoc whether a given request should use the shared session or not. That responsibility has already been assigned.
client = SessionManager("https://api.example.com")
user_response = client.get("/users/42", timeout=10)
update_response = client.put(
    "/users/42",
    json={"name": "Updated Name"},
    timeout=10,
)
delete_response = client.delete("/users/42", timeout=10)
print(user_response.status_code)
print(update_response.status_code)
print(delete_response.status_code)
This is not a large abstraction. That is why it works. It simply gives repeated HTTP actions a stable place to live.
What This Post Should and Should Not Claim
This pattern does not solve every API concern. It does not yet explain retries in depth. It does not explain exponential back-off. It does not explain rate limiting. Those are separate jobs. This post is narrower. Its job is to show how a shared session can become a small, clear client with explicit request methods.
That distinction matters because once a series has several posts on related request behavior, each post needs one clear responsibility. This one is about client shape. Another can be about retry behavior. Another can be about pacing. Another can be about URL normalization.
What a Beginner Should Keep
The most useful beginner lesson here is not that every project needs an API client class. It is that once your code starts making the same kinds of requests to the same service repeatedly, those methods deserve one home. A small wrapper around requests.Session() is often enough.
The class owns the session. The object owns the base URL. The methods own the repeated HTTP actions. That is a cleaner shape than scattering those responsibilities through the rest of the code.
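As a rough sketch of what “often enough” can mean, here is about the smallest version of that wrapper. APIClient is an invented name, and a real client would add the other request methods the same way.

```python
import requests


class APIClient:
    """A minimal sketch of the smallest useful wrapper.

    The class name and structure are illustrative, not a library API;
    post, put, and delete would be added the same way as get.
    """

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()

    def get(self, path, **kwargs):
        return self.session.get(f"{self.base_url}/{path.lstrip('/')}", **kwargs)


client = APIClient("https://api.example.com/")
print(client.base_url)  # https://api.example.com
```

Even this tiny version already owns the session and the base URL, which is most of the benefit.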
Frequently Asked Questions
These are the practical questions beginners usually have when a reusable session starts becoming a small API client.
Why build a small API client instead of calling requests.get() everywhere?
Because repeated direct calls start duplicating base URLs, request habits, and session behavior across the codebase.
What does the client actually own?
It owns the reusable session, the base URL, and the request methods that belong to that relationship with the API.
Why is the base URL better inside the object?
Because the repeated part of the address only needs to live in one place, which makes every request call easier to read.
Does every project need a client class like this?
No. Small scripts may not need it. It becomes useful when repeated request behavior starts to feel scattered or repetitive.
Why expose get(), post(), put(), and delete() as methods?
Because those repeated HTTP actions then have one stable interface instead of being rebuilt ad hoc throughout the program.
Is this mainly about HTTPS support?
No. The real value is not that the code can use HTTPS. Requests could already do that. The value is the cleaner ownership of session and request behavior.
What does the Singleton-style part do here?
It keeps one shared session manager per base URL, which helps reuse session behavior instead of recreating it unnecessarily.
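The caching mechanics can be seen on their own, without requests involved at all. The stand-in class below is an illustration of the same per-key caching idea used in SessionManager.__new__; nothing about it is specific to HTTP.

```python
class PerKeySingleton:
    """Illustrative stand-in for the caching in SessionManager.__new__:
    one instance per key, reused on every later construction."""

    _instances = {}

    def __new__(cls, key):
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]


a = PerKeySingleton("api.example.com")
b = PerKeySingleton("api.example.com")
c = PerKeySingleton("other.example.com")
print(a is b)  # True: same key, same shared object
print(a is c)  # False: a different key gets its own instance
```

In the real client, “one instance per key” translates to “one session per base URL,” which is what keeps session behavior from being recreated.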
What is the simplest takeaway from this pattern?
Once repeated API calls to the same service start to pile up, give them one home instead of letting them stay scattered.
No Neat Bow
There is no dramatic ending here because the idea itself is modest. You start with direct API calls. You notice repetition. You give one session one clearer interface. That is all. But that small move matters because it turns repeated request code into something easier to read, easier to reuse, and easier to extend later when retries, back-off, or rate limits become part of the picture.
Further Reading
If you want the broader entry point for this topic, read Singleton Sessions, Retries, and Rate Limits in Python Requests.
If you want the narrower follow-up on temporary failures, read Enhancing Resilience in Your Python Requests with Exponential Back-off.
If you want the utility post for URL hygiene, read Ensuring a Clean Base URL with Python’s urlparse.