AICTE EduSkills Google AI-ML Virtual Internship: Bachelor of Technology in Computer Science and Engineering
VIRTUAL INTERNSHIP
An Internship Report Submitted at the End of the Seventh Semester
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
Submitted By
BODDEPALLI MEENESH
(21981A0520)
This is to certify that this internship report entitled “GOOGLE AI-ML”, done by “BODDEPALLI
MEENESH (21981A0520)”, a student of B.Tech in the Department of Computer Science and
Engineering, Raghu Engineering College, during the period 2021-2025, in partial fulfillment of the requirements
for the award of the Degree of Bachelor of Technology in Computer Science and Engineering to Jawaharlal Nehru
Technological University, Gurajada Vizianagaram, is a record of bonafide work carried out under my
guidance and supervision.
The results embodied in this internship report have not been submitted to any other University or
Institute for the award of any Degree.
External Examiner
DISSERTATION APPROVAL SHEET
This is to certify that the dissertation titled
Google AI-ML Virtual Internship
BY
BODDEPALLI MEENESH (21981A0520)
Dr. P. Appala Naidu
INTERNSHIP GUIDE
(Professor)
Internal Examiner
External Examiner
Dr. R. Sivaranjani
HOD
(Professor)
Date:
DECLARATION
This is to certify that this internship titled “GOOGLE AI-ML” is bonafide work done by me,
in partial fulfillment of the requirements for the award of the degree of B.Tech, and submitted to the
Department of Computer Science and Engineering, Raghu Engineering College, Dakamarri.
I also declare that this internship is the result of my own effort, that it has not been copied from
anyone, and that I have taken citations only from the sources mentioned in the references.
This work was not submitted earlier at any other University or Institute for the award of any
degree.
Date:
Place:
BODDEPALLI MEENESH
(21981A0520)
CERTIFICATE
ACKNOWLEDGEMENT
I express sincere gratitude to my esteemed institute, Raghu Engineering College, which has
provided me the opportunity to fulfill my most cherished desire and reach my goal.
I take this opportunity with great pleasure to place on record my personal indebtedness to
Mr. Raghu Kalidindi, Chairman of Raghu Engineering College, for providing the necessary
departmental facilities.
I would like to thank the Principal, Dr. CH. Srinivasu, of Raghu Engineering College, for
providing the requisite facilities to carry out projects on campus. Your expertise in the subject matter
and dedication towards our project have been a source of inspiration for all of us.
I sincerely express my deep sense of gratitude to Dr. R. Sivaranjani, Professor and Head of
the Department of Computer Science and Engineering, Raghu Engineering College, for
her perspicacity, wisdom, and sagacity coupled with compassion and patience. It is my great pleasure
to submit this work under her wing. I thank her for guiding me to the successful completion of this
project work.
I would like to thank Mr. Karthik Padmanabhan of the EduSkills Foundation for
providing the technical guidance to carry out the module assigned. Your expertise in the subject matter
and dedication towards our project have been a source of inspiration for all of us.
I extend my heartfelt thanks to all faculty members of the Computer Science department for
their valuable teaching of theory and practical subjects, which were used in the project.
I thank the non-teaching staff of the Department of Computer Science and Engineering, Raghu
Engineering College, for their inexpressible support.
Regards
BODDEPALLI MEENESH
(21981A0520)
ABSTRACT
The AI/ML internship at Google is a rigorous program designed to let participants
learn in depth about artificial intelligence and machine learning, including but
not limited to the use of TensorFlow. Interns undertake different areas of work, including
theory and practice courses that are meant to promote practical application of the knowledge
gained.
Progress through the course is demonstrated by successively unlocking badges on the Google
Developer Profile, each marking proficiency in essential skills such as object
detection, image classification, and product image search.
By working hands-on with TensorFlow over the course of the internship, participants
gain the ability to design and deploy machine learning models with confidence. They also
use Google Colab, an online platform that provides a Jupyter-notebook environment for
scientific computing and for writing scientific documents, which makes experimentation and
model development more flexible.
The program structure involves mentorship meetings, group projects, and
code reviews together with other interns, aiming at a balanced learning experience and skill
building.
These skills are put to good use in real-world applications of AI/ML.
During code reviews and presentations, communication and problem-solving skills are further
exercised, preparing students for real-world workplace scenarios. The technology
employed in the program accurately reflects industry standards, and the emphasis on team
learning equips participants with skills and knowledge that are highly valued in
modern AI/ML. At the end of the internship, trainees, with strong knowledge of the basic
approaches to Artificial Intelligence/Machine Learning, possess a practical toolkit that helps
them solve such tasks. The internship program thus leaves trainees well equipped
to handle future opportunities in the upcoming digital space.
Table of Contents
S.NO CONTENT
1. Introduction
8. Conclusion
1. INTRODUCTION
Today's world needs understanding and handling of visual information in almost every field, for
instance self-driving technology, e-health, and e-commerce. Computer vision, a branch of
artificial intelligence, involves the capacity of computers to process visual information and has a number
of applications that improve user experiences and business processes.
In this project we focus on the application of TensorFlow to three important computer vision
problems: object detection, image classification, and product search. TensorFlow is a comprehensive
open-source library created by Google that allows advanced solutions to be integrated easily into
existing architectures.
Self-driving cars and pattern recognition technologies require object detection, whereas image
classification is useful in enhancing medical images as well as content security. Besides that, visual
product search has also gained importance, especially in online shopping, where images are used to
find products instead of written text.
Within the scope of this project, we will consider the methods, algorithms, datasets, and other
relevant components related to the aforementioned tasks. We will demonstrate the application of
TensorFlow in building effective and precise vision systems that are implemented in practice.
3.1 Add ML Kit Object Detection and Tracking API to the project
fig 3.1 interface of the object detection mobile app; fig 3.2 capturing photo in starter app; fig 3.3 captured image
The TFLite Task Library makes it easy to integrate mobile-optimized machine learning models
into a mobile app. It supports many popular machine learning use cases, including object detection,
image classification, and text classification. You can load the TFLite model and run it with just a few
lines of code.
The starter app is a minimal Android application that:
- Uses either the device camera or available preset images.
- Contains methods for taking pictures and presenting object detection output.
You will add object detection functionality to the application by filling out the method
`runObjectDetection()`
The function is defined as follows:
`runObjectDetection(bitmap: Bitmap)`: conducts object detection on an input image using the
object detection model.
Add a Pre-trained Object Detection Model
● Download the model. The pre-trained TFLite model is EfficientDet-Lite. It is designed
to be mobile-efficient and is trained on the COCO 2017 dataset.
● Add dependencies.
● Configure and perform object detection.
● Render the detection results.
● Train a custom object detection model: you will train a custom model to detect meal
ingredients using TFLite Model Maker and Google Colab. The dataset is composed of
labeled images of ingredients such as cheese and baked products.
fig 4.2 accuracy of the predicted items
You have developed an Android application that can detect objects in images, first with a TFLite
pre-trained model, and then trained and deployed a custom object detection model. You used TFLite
Model Maker for model training and the TFLite Task Library to integrate the model into the application.
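Two of the options you typically set when configuring the TFLite Task Library detector are a confidence (score) threshold and a maximum number of results. Their effect can be sketched in plain Python, independent of any library; the detection dictionaries and the function name below are purely illustrative:

```python
def filter_detections(detections, score_threshold=0.5, max_results=3):
    """Keep only detections at or above the score threshold, best-first,
    capped at max_results -- mirroring the detector options."""
    kept = [d for d in detections if d["score"] >= score_threshold]
    kept.sort(key=lambda d: d["score"], reverse=True)
    return kept[:max_results]

# Illustrative raw detector output: label and confidence score.
raw = [
    {"label": "cheese", "score": 0.91},
    {"label": "bread", "score": 0.62},
    {"label": "fork", "score": 0.30},
    {"label": "tomato", "score": 0.75},
]
print(filter_detections(raw, score_threshold=0.5, max_results=2))
# → [{'label': 'cheese', 'score': 0.91}, {'label': 'tomato', 'score': 0.75}]
```

Raising the threshold or lowering the result cap trades recall for fewer, more confident boxes on screen.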
5. Get started with product image search
5.1 Detect objects in images to build a visual product search with ML Kit: Android
Have you seen the Google Lens demo, where you can point your phone camera at an object and
find where you can buy it online? If you want to learn how you can add the same feature to your app,
then this codelab is for you. It is part of a learning pathway that teaches you how to build a product
image search feature into a mobile app.
In this codelab, you will learn the first step to build a product image search feature: how to detect
objects in images and let the user choose the objects they want to search for. You will use ML Kit
Object Detection and Tracking to build this feature.
The onObjectClickListener is called whenever the user taps on any of the detected objects on the screen.
It receives the cropped image that contains only the selected object.
The code snippet does 3 things:
● Takes the cropped image and serializes it to a PNG file.
● Starts the ProductSearchActivity to execute the product search sequence.
● Includes the cropped image URI in the start-activity intent so that ProductSearchActivity can
retrieve it later to use as the query image.
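Conceptually, the cropped image handed to the listener is just the sub-region of the full bitmap inside the tapped object's bounding box. A minimal Python sketch, with a nested list standing in for a bitmap (all names are illustrative):

```python
def crop(bitmap, left, top, right, bottom):
    """Return the sub-image covered by the bounding box
    (right/bottom exclusive), like cropping out a tapped object."""
    return [row[left:right] for row in bitmap[top:bottom]]

# A 4x4 "image" of pixel values; the detected object occupies
# the 2x2 box at (left=1, top=1, right=3, bottom=3).
image = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]
print(crop(image, 1, 1, 3, 3))  # → [[5, 6], [9, 10]]
```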
There are a few things to keep in mind:
● The logic for detecting objects and querying the backend has been split into 2 activities only to
make the codelab easier to understand. It's up to you to decide how to implement them in your
app.
● You need to write the query image into a file and pass the image URI between activities because
the query image can be larger than the 1MB size limit of an Android intent.
● You can store the query image in PNG because it's a lossless format.
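The pattern behind the last two points, spilling a payload that is too large to pass by value into a file and handing over only a reference, is platform-independent. A small Python sketch of the idea (the 1 MB figure is the Android intent limit mentioned above; the function name is illustrative):

```python
import tempfile

INTENT_SIZE_LIMIT = 1 << 20  # ~1 MB, the Android intent payload limit

def hand_off(image_bytes):
    """Pass small payloads by value; spill large ones to a file and
    pass only the path (analogous to passing a URI in the intent)."""
    if len(image_bytes) <= INTENT_SIZE_LIMIT:
        return {"inline": image_bytes}
    with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
        f.write(image_bytes)
        return {"path": f.name}

small = hand_off(b"\x89PNG" + b"\x00" * 100)
large = hand_off(b"\x89PNG" + b"\x00" * (2 * INTENT_SIZE_LIMIT))
print("inline" in small, "path" in large)  # → True True
```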
6.3.2 Explore the product search backend
Build the product image search backend
This codelab requires a product search backend built with Vision API Product Search. There are
two options to achieve this:
Option 1: Use the demo backend that has been deployed for you
Option 2: Create your own backend by following the Vision API Product Search quickstart
You will come across these concepts when interacting with the product search backend:
● Product Set: A product set is a simple container for a group of products. A product catalog can
be represented as a product set and its products.
● Product: After you have created a product set, you can create products and add them to the
product set.
● Product's Reference Images: They are images containing various views of your products.
Reference images are used to search for visually similar products.
● Search for products: Once you have created your product set and the product set has been
indexed, you can query the product set using the Cloud Vision API.
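The relationship between these concepts can be illustrated with a tiny in-memory model. This is only a conceptual sketch, not the real Vision API Product Search client API; the class names, fields, and category-based "search" below are invented for illustration, whereas the real backend ranks by visual similarity over indexed reference images:

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    category: str
    reference_images: list  # URIs of images showing views of the product

@dataclass
class ProductSet:
    """A simple container for a group of products (the catalog)."""
    products: list = field(default_factory=list)

    def add(self, product):
        self.products.append(product)

    def search(self, category):
        # Stand-in for the indexed visual-similarity query:
        # here we just filter by category.
        return [p for p in self.products if p.category == category]

catalog = ProductSet()
catalog.add(Product("running shoe", "shoes", ["gs://demo/shoe1.jpg"]))
catalog.add(Product("summer dress", "dresses", ["gs://demo/dress1.jpg"]))
print([p.name for p in catalog.search("shoes")])  # → ['running shoe']
```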
6.3.3 Understand the preset product catalog
The product search demo backend used in this codelab was created using the Vision API Product
Search and a product catalog of about a hundred shoes and dress images. Here are some images from the
catalog:
The reference images of the demo product search backend were set up with public-read
permission. Therefore, you can easily convert a GCS URI to an HTTP URL and display it in the app
UI. You only need to replace the gs:// prefix with https://storage.googleapis.com/.
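That substitution is a one-line string transformation. A small Python helper makes it concrete (the function name is illustrative; the prefix mapping comes from the text above):

```python
def gcs_uri_to_http_url(gcs_uri):
    """Convert a gs:// URI to its public HTTP URL, assuming the
    object has public-read permission as in the demo backend."""
    prefix = "gs://"
    if not gcs_uri.startswith(prefix):
        raise ValueError("not a GCS URI: " + gcs_uri)
    return "https://storage.googleapis.com/" + gcs_uri[len(prefix):]

print(gcs_uri_to_http_url("gs://my-bucket/shoes/shoe1.jpg"))
# → https://storage.googleapis.com/my-bucket/shoes/shoe1.jpg
```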
6.8 Implement the API call
Next, craft a product search API request and send it to the backend. You'll use Volley and the Task
API in the same way as the earlier product search API call.
6.9 Connect the two API requests
Go back to annotateImage and modify it to get all the reference images' HTTP URLs before
returning the ProductSearchResult list to its caller.
Once the app loads, tap any preset image, select a detected object, and tap the Search button to see
the search results, this time with the product images.
fig 6.3 interface of product image search app after connecting the two APIs
For the rest of this lab, I'll be running the app in the iPhone simulator, which should support the
build targets from the codelab. If you want to use your own device, you might need to change the
build target in your project settings to match your iOS version.
Run it and you'll see something like this:
Note the very generic classifications – petal, flower, sky. The model you created in the previous
codelab was trained to detect 5 varieties of flower, including this one – a daisy.
For the rest of this codelab, you'll look at what it will take to upgrade your app with the custom
model.
1. Open your ViewController.swift file. You may see an error on `import MLKitImageLabeling` at
the top of the file. This is because you removed the generic image labeling libraries when you updated
your Podfile.
import MLKitVision
import MLKit
import MLKitImageLabelingCommon
import MLKitImageLabelingCustom
It might be easy to speed-read these and think that they're repeating the same import, but note the
"Common" and "Custom" suffixes at the end!
2. Next you'll load the custom model that you added in the previous step. Find the getLabels() func.
Beneath the line that reads visionImage.orientation = image.imageOrientation, add these lines:
3. Find the code for specifying the options for the generic ImageLabeler. It's probably giving you an
error since those libraries were removed:
let options = ImageLabelerOptions()
Replace that with this code, to use a CustomImageLabelerOptions, and which specifies the local model:
let options = CustomImageLabelerOptions(localModel: localModel)
...and that's it! Try running your app now! When you try to classify the image it should be more accurate
– and tell you that you're looking at a daisy with high probability!
fig 7.2 showing accuracy of the classification of an image
CONCLUSION
Undertaking a virtual internship program at Google in the area of AI/ML proved to be a
rewarding experience because it enabled me to learn more about machine learning algorithms, neural
networks, and model optimization. Using TensorFlow, for instance, I was able to put
theoretical concepts into practice and work on projects such as [specific project/technology].
The need to adjust to virtual working and to handle big data was a challenge; however, with
my mentor and the team, I managed to deal with it. Teamwork and proper communication were vital in
overcoming the challenges and completing the projects successfully.
This internship has made me realize how much I like AI/ML, and I’m looking forward to
[specific area of interest, e.g., natural language processing]. I appreciate my mentor from Google as well
as the AI/ML team for the guidance and encouragement provided, as it has helped me a great deal.
In conclusion, this internship has improved my technical abilities and raised my desire to pursue
a career in AI and machine learning development.