Background Tuning Of Images With DeepLab V3 Using Pixellib



What if I told you that you don’t need any software to edit your photos, and can do it yourself with a pre-trained deep learning model in Python? How awesome would that be?

Let’s see how we can make that happen.

But before that, one needs to understand what foreground and background are. The foreground is the subject of the image (a person, for example), while the background is everything behind it.
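PixelLib performs this separation for us, but the underlying idea can be illustrated with plain NumPy: a segmentation model such as DeepLab V3 predicts a per-pixel mask that marks which pixels belong to the foreground. The tiny image and mask below are hand-made for illustration, not model output:

```python
import numpy as np

# A tiny 4x4 single-channel "image" and a hand-made foreground mask.
# In practice, a model like DeepLab V3 predicts this mask per pixel.
image = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # the central 2x2 block is "foreground"

# Keep foreground pixels, zero out the background.
foreground_only = np.where(mask, image, 0)
print(foreground_only)
```

Every background-tuning effect in this article (blur, gray, solid color, swap) is a different way of filling in the pixels where this mask is False.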



Source: Pinterest

Required Deep Learning Tools

Now we will look at the dependency and how to download the pre-trained model.

To download the pre-trained model:

Open your browser, paste this URL into the address bar, and press Enter: https://github.com/ayoolaolafenwa/PixelLib/releases/download/1.1/deeplabv3_xception_tf_dim_ordering_tf_kernels.h5

Dependency: Pixellib

Installation: pip install pixellib

Code:

#importing packages
import pixellib
from pixellib.tune_bg import alter_bg
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
from IPython.display import Image as img
from pylab import rcParams
rcParams['figure.figsize'] = 10, 10 #increases the size of the plot
change_bg = alter_bg() #object creation
#here alter_bg() is a class
print(dir(change_bg)) #the methods it contains
Output:
['__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 'blur_bg',
 'blur_camera',
 'blur_frame',
 'blur_video',
 'change_bg_img',
 'color_bg',
 'color_camera',
 'color_frame',
 'color_video',
 'gray_bg',
 'gray_camera',
 'gray_frame',
 'gray_video',
 'load_pascalvoc_model',
 'model',
 'segmentAsPascalvoc']

Loading Pre-Trained DeepLab V3

Here, we will load the pre-trained deep learning model, DeepLab V3, for our task of background tuning.

#loading the pre-trained model
change_bg.load_pascalvoc_model("C:/Users/91884/Desktop/deeplabv3_xception_tf_dim_ordering_tf_kernels.h5")
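Since the weights are loaded from a local path, a quick sanity check avoids a confusing stack trace when the file is missing or the path is wrong. This is just a convenience sketch; the file name below matches the download above, but the directory is up to you:

```python
import os

def model_is_ready(model_path):
    """Return True if the weights file exists and looks like an HDF5 file."""
    return os.path.exists(model_path) and model_path.endswith(".h5")

# Illustrative path; replace with wherever you saved the downloaded file.
weights = "deeplabv3_xception_tf_dim_ordering_tf_kernels.h5"
if model_is_ready(weights):
    print("Weights found; safe to call change_bg.load_pascalvoc_model(weights).")
else:
    print("Weights missing; download them from the PixelLib releases page first.")
```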

Loading Images

Now that the pre-trained model is ready for background tuning, we will load the main image and the image of the desired background.

Main image:

file_name="C:/Users/91884/Pictures/demo.jpg"

plt.imshow(Image.open(file_name))

OUTPUT

Background image:

bg_file="C:/Users/91884/Pictures/background.jpg"

plt.imshow(Image.open(bg_file))

OUTPUT

Blur Background

First of all, we will blur the background of the main image.

change_bg.blur_bg(file_name, moderate=True, output_image_name="blur1.jpg")

plt.imshow(Image.open('blur1.jpg'))

OUTPUT
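PixelLib handles the segmentation and blurring internally (it also accepts `low` and `extreme` in place of `moderate` for the blur strength), but the effect itself is simple to picture: blur the whole image, then paste the sharp foreground back through the mask. A minimal NumPy sketch with a naive box blur and a hand-made mask, purely for illustration:

```python
import numpy as np

def box_blur(img):
    """Naive 3x3 box blur on a 2D array (edges handled by padding)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

image = np.zeros((5, 5))
image[2, 2] = 9.0                        # a single bright "foreground" pixel
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                        # pretend the model segmented it

blurred = box_blur(image)
result = np.where(mask, image, blurred)  # sharp foreground, blurred background
print(result)
```

The foreground pixel stays at 9.0 while its blurred copies spread into the surrounding background.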


Grey Background

In the next step, we will make the background gray.

change_bg.gray_bg(file_name,output_image_name="gray.jpg")


plt.imshow(Image.open('gray.jpg'))

OUTPUT
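Graying the background comes down to the standard luminance formula (0.299R + 0.587G + 0.114B), applied only where the mask marks background. A small sketch of that conversion, with a hand-made mask standing in for the model's segmentation:

```python
import numpy as np

# A 2x2 RGB "image": pure red, green, blue and white pixels.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)

# ITU-R BT.601 luminance weights, the usual RGB-to-gray conversion.
weights = np.array([0.299, 0.587, 0.114])
luma = rgb @ weights                             # shape (2, 2)

mask = np.array([[True, True], [False, False]])  # top row = background
gray3 = np.repeat(luma[..., None], 3, axis=-1)   # replicate luma into RGB
result = np.where(mask[..., None], gray3, rgb)   # gray background, color foreground
print(result)
```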


Changing the background to a Solid Color

In this step, we will set the background of the main image to a solid color.

change_bg.color_bg(file_name, colors = (225, 225, 225), output_image_name = "colored_bg.jpg")

plt.imshow(Image.open('colored_bg.jpg'))

OUTPUT
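The `colors` tuple is an RGB value, so (225, 225, 225) gives a light gray and (255, 255, 255) would give pure white. Under the hood this is the simplest effect of the four: paint the solid color wherever the mask says background. A brief illustrative sketch:

```python
import numpy as np

image = np.full((2, 2, 3), 10.0)           # dummy 2x2 RGB image
mask = np.array([[True, False],
                 [False, True]])           # True = foreground (kept)
solid = np.array([225.0, 225.0, 225.0])    # light gray, as in color_bg above

# Broadcasting fills the background pixels with the solid color.
result = np.where(mask[..., None], image, solid)
print(result)
```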


Changing the Background

Finally, we will change the background of the main image.

change_bg.change_bg_img(f_image_path = file_name,b_image_path = bg_file, output_image_name = "new_img.jpg")

plt.imshow(Image.open("new_img.jpg"))

OUTPUT
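To apply all four effects in one go, the calls above can be wrapped in a small helper. This is only a convenience sketch: it forwards to whatever `alter_bg`-like object it is given, using the method names listed by `dir(change_bg)` earlier, and assumes the model has already been loaded:

```python
def tune_background(change_bg, file_name, bg_file, prefix="out"):
    """Run blur, gray, solid-color and background-swap on one image.

    `change_bg` is expected to expose the PixelLib alter_bg methods
    (blur_bg, gray_bg, color_bg, change_bg_img); the four results are
    written as JPEGs named with the given prefix.
    """
    change_bg.blur_bg(file_name, moderate=True,
                      output_image_name=f"{prefix}_blur.jpg")
    change_bg.gray_bg(file_name, output_image_name=f"{prefix}_gray.jpg")
    change_bg.color_bg(file_name, colors=(225, 225, 225),
                       output_image_name=f"{prefix}_color.jpg")
    change_bg.change_bg_img(f_image_path=file_name, b_image_path=bg_file,
                            output_image_name=f"{prefix}_new.jpg")
```

Usage would then be a single call: `tune_background(change_bg, file_name, bg_file)`.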


Conclusion

As seen above, we were able to tune the background of the image very effectively. It required very little effort and could be done in a few easy steps, even though we were using deep learning. Pre-trained deep learning models thus yield effective results with less coding effort.

Hope you liked the article. Stay tuned for more.

The full code of the above implementation is available in AIM’s GitHub repository. Please visit this link to find the notebook for this code.





Bhavishya Pandit



Understanding and building fathomable approaches to problem statements is what I like the most. I love conversations whose main plot is machine learning, computer vision, deep learning, data analysis and visualization.

Apart from these, my interests also lie in listening to business podcasts, use cases and reading self-help books.
