Please use this identifier to cite or link to this item: https://hdl.handle.net/1889/3429
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Matrella, Guido | -
dc.contributor.author | Hassannejad, Hamid | -
dc.date.accessioned | 2017-07-12T13:13:20Z | -
dc.date.available | 2017-07-12T13:13:20Z | -
dc.date.issued | 2017 | -
dc.identifier.uri | http://hdl.handle.net/1889/3429 | -
dc.description.abstract | Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be an effective way of promoting the adoption of a healthy lifestyle and of improving both personal health and the national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical, so several methods have been proposed to automate the process. This thesis offers a new approach to monitoring food intake based on image analysis. Two separate solutions are developed, one for food recognition and one for food portion estimation. Food recognition is performed by a very deep convolutional neural network (DCNN) based on Google's Inception architecture. A pre-trained version of the network is used to overcome the rather small size of the available food datasets; the network is then fine-tuned to classify food images from three well-known food image datasets. The results significantly improve on the best published results for the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors'. Because of this, even if it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements of mobile systems. The thesis also presents a new approach to food portion estimation using image-based modeling. The modeling method consists of three steps. Firstly, a short video of the food is taken with the user's smartphone. From this video, six frames are selected based on their viewpoints, as determined by the smartphone's orientation sensors. Secondly, the user marks one of the frames to seed an interactive segmentation algorithm. Segmentation is based on a Gaussian Mixture Model combined with the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point cloud to model the food. At the same time, a stochastic object-detection method locates a checkerboard used as a size/ground reference. The modeling algorithm is optimized to use only six images and to keep the computational cost acceptable. In our evaluation procedure, we achieved an average accuracy of 92% on a set of images of different kinds of pasta and bread, with a processing time of about 23 s. (Illustrative code sketches of the recognition, segmentation, and reference-detection steps follow this metadata record.) | it
dc.language.iso | Inglese | it
dc.publisher | Università degli Studi di Parma. Dipartimento di Ingegneria dell'Informazione | it
dc.relation.ispartofseries | Dottorato di ricerca in Tecnologie dell'informazione | it
dc.rights | © Hamid Hassannejad, 2017 | it
dc.subject | Diet monitoring | it
dc.subject | Image analysis | it
dc.subject | Food Image | it
dc.subject | Volume estimation | it
dc.title | Image Analysis-Based Food Recognition and Volume Estimation for Diet Monitoring | it
dc.type | Doctoral thesis | it
dc.subject.miur | ING-INF/01 | it
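
The recognition pipeline summarized in the abstract, a pre-trained Inception-style DCNN fine-tuned on food images, can be illustrated with a short transfer-learning sketch. The snippet below is a minimal example and not the thesis's implementation: it assumes TensorFlow/Keras, the InceptionV3 variant with ImageNet weights, and a hypothetical food_images/train directory with one sub-folder per class; the class count, input size, and training schedule are likewise illustrative assumptions.

```python
# Minimal transfer-learning sketch: fine-tune an Inception-style network on food images.
# The dataset path, class count, and schedule are assumptions, not values from the thesis.
import tensorflow as tf

IMG_SIZE = (299, 299)     # InceptionV3's native input resolution
NUM_CLASSES = 101         # e.g. Food-101; adjust to the dataset at hand

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False    # first stage: train only the new classifier head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "food_images/train", image_size=IMG_SIZE, label_mode="categorical")
model.fit(train_ds, epochs=5)

# second stage: unfreeze the base network and fine-tune end to end at a lower rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

The two-stage schedule (head first, then the whole network at a small learning rate) is a common way to fine-tune a pre-trained model on a comparatively small dataset, which is the situation the abstract describes.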
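The portion-estimation step seeds an interactive segmentation based on a Gaussian Mixture Model combined with graph cuts. OpenCV's GrabCut implements exactly that combination, so the sketch below uses it as a stand-in; the frame path and the rectangular seed (standing in for the user's marks) are assumptions for illustration, not the interface described in the thesis.

```python
# Minimal sketch of GMM + graph-cut segmentation via OpenCV's GrabCut.
import cv2
import numpy as np

frame = cv2.imread("frame_03.jpg")            # one of the six selected frames (hypothetical path)
mask = np.zeros(frame.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)     # GMM parameters for the background
fgd_model = np.zeros((1, 65), np.float64)     # GMM parameters for the food (foreground)

seed_rect = (50, 50, 400, 300)                # user-marked region around the food (assumed)
cv2.grabCut(frame, mask, seed_rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# keep only pixels labelled (probably) foreground
food_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
food_only = frame * food_mask[:, :, None]
cv2.imwrite("food_segmented.png", food_only)
```

The resulting mask is the kind of input a subsequent image-based modeling step can use to restrict the point cloud to the food region.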
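The abstract also mentions a stochastic object-detection method that locates a checkerboard as a size/ground reference. The sketch below only shows the general idea of recovering a metric scale from a detected checkerboard, using OpenCV's standard corner detector rather than the thesis's stochastic method; the board dimensions and square size are assumptions.

```python
# Minimal sketch: estimate a mm-per-pixel scale from a checkerboard in the frame.
import cv2
import numpy as np

PATTERN = (7, 6)       # inner corners per row / column (assumed board layout)
SQUARE_MM = 20.0       # physical edge length of one square (assumed)

gray = cv2.cvtColor(cv2.imread("frame_03.jpg"), cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, PATTERN)
if found:
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    # distance in pixels between two adjacent inner corners spans one square edge
    px = np.linalg.norm(corners[0, 0] - corners[1, 0])
    print("approx. scale: %.3f mm/pixel" % (SQUARE_MM / px))
```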
Appears in Collections: Tecnologie dell'informazione. Tesi di dottorato

Files in This Item:
File | Description | Size | Format
TesiDottorato.pdf | Thesis | 23.14 MB | Adobe PDF
Final report.pdf (restricted until 2100-01-01) | Report on the activities carried out during the doctorate | 61.04 kB | Adobe PDF


This item is licensed under a Creative Commons License.