requests.get()

+1 Austin Elliott · December 7, 2014
I'm following Bucky's Web Crawler tutorials, and he uses requests.get() without ever fully explaining what it does. It sounds simple, but why did he use that instead of urllib.request.urlopen()? It looks like they do the same thing. Am I wrong?
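
For reference, here's the minimal version of what I mean, done both ways (a rough sketch; the URL is just an example):

import urllib.request
import requests

url = 'https://api.github.com'

# urllib.request returns an http.client.HTTPResponse; the body is raw bytes
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode('utf-8')
    print(resp.getcode(), len(body))

# requests returns a requests.Response; the body is already decoded to text
r = requests.get(url)
print(r.status_code, len(r.text))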


Replies

0 Doug Fresh · December 7, 2014
It comes down to preference. In my opinion, requests.get() is much simpler to use (requests is the library, get() is one of its functions).
This code, written with urllib2 (the Python 2 predecessor of urllib.request):


import urllib2  # Python 2; in Python 3 this functionality lives in urllib.request

gh_url = 'https://api.github.com'
req = urllib2.Request(gh_url)

# Set up HTTP Basic Auth by hand
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, 'user', 'pass')
auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)
urllib2.install_opener(opener)

# Make the request and inspect the response
handler = urllib2.urlopen(req)
print(handler.getcode())
print(handler.headers.getheader('content-type'))

can be done like this with requests.get():

import requests

r = requests.get('https://api.github.com', auth=('user', 'pass'))
print(r.status_code)
print(r.headers['content-type'])

A couple of lines were trimmed to make the contrast a little more obvious, but the main point is how much requests handles for you through its built-in methods: authentication, the status code, and the headers are each one argument or one attribute away. By eliminating boilerplate and extra declarations, it's not only easier on the programmer, it's easier to teach from. It's also a great example of a group of Python programmers getting together to make something simpler; at its core, that was the whole purpose of the requests library.
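
To give a feel for those built-ins, here's a small sketch of two more conveniences, query-string encoding and JSON decoding (the URL and parameters are only illustrative):

import requests

# Query parameters are encoded into the URL for you
r = requests.get('https://api.github.com', params={'per_page': 10})
print(r.url)          # shows the encoded query string
print(r.status_code)

# JSON responses decode in one call, no json module needed
data = r.json()
print(type(data))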
