Image Scraper! Steals all images off any buckys room page!

+8 Isaiah Rahmany · September 4, 2014
__author__ = 'Isaiah'

import wget
import random as ran
from bs4 import BeautifulSoup
import requests
urlmain = input("Enter a url to take all images from -> ")

r = requests.get(urlmain)
soup = BeautifulSoup(r.text, "html.parser")  # name a parser to avoid the bs4 warning

src = []

for link in soup.find_all('img'):
    rannum = ran.randrange(1, 1000)
    filename = str(rannum) + ".jpg"
    url = link.get('src')
    finalurl = "" + url
    wget.download(finalurl, filename)

This is the code. Extremely simple, if you ask me.
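One quirk worth noting: `ran.randrange(1, 1000)` can return the same number twice, so a later download may silently overwrite an earlier one. A minimal sketch of an index-based naming scheme instead (the `srcs` list here is a hypothetical stand-in for the scraped `src` values):

```python
# enumerate gives each image a unique index, so filenames never collide
srcs = ["a.jpg", "b.png", "c.jpg"]  # hypothetical stand-ins for scraped srcs
filenames = [str(i) + ".jpg" for i, _ in enumerate(srcs)]
print(filenames)  # ['0.jpg', '1.jpg', '2.jpg']
```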

Uses 3 external libraries, which are: wget, BeautifulSoup (bs4), and requests.

0 Steven the awesome · September 8, 2014
It is working, but when I try it for Instagram or Facebook it doesn't work.
0 Isaiah Rahmany · September 9, 2014
Lol yea you need to sorta fix the final url
0 Steven the awesome · September 9, 2014
So how do you do that?
0 Isaiah Rahmany · September 10, 2014
Well, it won't work for all sites, but you can change urlmain to the main website's page that is the base directory for all the images.
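What Isaiah describes doing by hand is what `urllib.parse.urljoin` from the standard library does automatically: it resolves a relative `src` against the page URL and leaves already-absolute URLs alone. A short sketch (the page URL and paths here are made up):

```python
from urllib.parse import urljoin

page = "http://example.com/gallery/index.html"  # hypothetical page URL

# relative src: resolved against the page's directory
print(urljoin(page, "pics/cat.jpg"))     # http://example.com/gallery/pics/cat.jpg
# root-relative src: resolved against the site root
print(urljoin(page, "/static/dog.jpg"))  # http://example.com/static/dog.jpg
# absolute src: returned unchanged
print(urljoin(page, "http://cdn.example.com/bird.jpg"))
```

In the scraper, `finalurl = urljoin(urlmain, url)` would handle both cases without editing the script per site.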
+1 Yoncho Yonchev · September 11, 2014
I don't know why, but I couldn't properly install wget with Python 3.4, which is the default python3 on my Mint 17 OS, so I used shutil (which is in the standard library) to do the download and wrote my own image grabber.
Btw, are you using virtualenv? It is a must if you code with Python. It installs libraries only in your virtual environment, so your other workspaces keep working without any problem and without mixing dependencies.
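Yoncho's approach could look something like the sketch below: a wget-free downloader using `requests` (which the original script already imports) streamed into the standard library's `shutil.copyfileobj`. The function names are my own, not taken from his code:

```python
import shutil

def save_stream(source, filename):
    # copy raw bytes from any file-like object to disk in chunks
    with open(filename, "wb") as f:
        shutil.copyfileobj(source, f)

def download_image(url, filename):
    # hypothetical drop-in replacement for wget.download(url, filename)
    import requests
    r = requests.get(url, stream=True)  # stream=True leaves the body unread
    r.raise_for_status()
    r.raw.decode_content = True         # let urllib3 undo gzip/deflate encoding
    save_stream(r.raw, filename)
```

Because `save_stream` accepts any file-like object, it works the same whether the bytes come from a network response or a local file.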