Christian Guarini (2018) – Implementation of a matching algorithm on large graphs in multicore and GPU architectures
Advisor: Prof. Pierluigi Ritrovato, Prof. Fatos Xhafa (Universitat Politècnica de Catalunya)
Nowadays graph representations are widely used for dealing with structural information in domains such as networks, psycho-sociology, image interpretation and pattern recognition, among many others. They make it possible to describe a set of objects together with their relationships. Analysing such data often requires measuring the similarity between two graphs. Unfortunately, due to its combinatorial nature, the graph matching problem in its various forms has a worst-case complexity that is exponential: as the size of the graphs increases, the matching process becomes more complex, even though the literature offers several algorithms with acceptable execution times, as long as the graphs are not too large and not too dense. There is therefore a need for a methodology that reduces the computational time of graph matching.

Generally, an algorithm is built and implemented as a serial stream of instructions: only one instruction executes at a time, and the next one starts only after the previous one has finished. For this reason sequential programs gain no performance from multicore architectures. Parallel computing, on the other hand, uses multiple processing elements concurrently to solve a problem, breaking it into independent parts so that each processing element can execute its part of the algorithm concurrently with the others.

In this thesis, the performance of a parallel algorithm is analysed. In particular, we present a parallel version of the sequential subgraph isomorphism algorithm VF3, built with the OpenACC programming standard for parallel computing, developed by Cray, CAPS, Nvidia and PGI, which was chosen because it is designed to simplify parallel programming on heterogeneous CPU and GPU systems. To assess the performance of the parallel version of VF3, it is compared with its sequential version and with two other multithreaded implementations (provided by Ing. V. Carletti), evaluating the differences between the algorithms in terms of execution time, memory consumption and two performance indexes, i.e. speed-up and efficiency. The tests were conducted on a large dataset composed of random graphs (more than 1000) with sizes in the range 300–10000 nodes, provided by the Mivia Lab; they required more than 40 days of machine time.

The analysis of the results confirms that, as the graphs become larger and denser, the parallel version outperforms the sequential one and, in some cases, also the other two multithreaded implementations in CPU-only tests. When a GPU is used for the computation, on the other hand, the results are worse, owing to the problem formulation: VF3 explores the state space with a DFS strategy, and this inherently sequential characterisation, together with the restrictions of the OpenACC APIs, the memory and the compiler, did not permit full exploitation of the GPU's computing capacity for solving graph matching problems.
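Speed-up and efficiency, the two performance indexes used in the comparison, are standard quantities; below is a minimal sketch of their computation in Python, where the timings and core count are hypothetical placeholders rather than measurements from the thesis.

```python
# Speed-up and efficiency of a parallel run versus the sequential baseline.
def speedup(t_seq: float, t_par: float) -> float:
    """S = T_sequential / T_parallel."""
    return t_seq / t_par

def efficiency(t_seq: float, t_par: float, n_workers: int) -> float:
    """E = S / p, i.e. speed-up per processing element (1.0 = ideal scaling)."""
    return speedup(t_seq, t_par) / n_workers

# Hypothetical timings in seconds, NOT results from the thesis.
t_seq, t_par, cores = 120.0, 20.0, 8
print(f"speed-up   = {speedup(t_seq, t_par):.2f}")            # 6.00
print(f"efficiency = {efficiency(t_seq, t_par, cores):.2f}")  # 0.75
```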
|
Giovanni Galatro (2018) – A kernel-based clustering approach for trends modelling in financial time series
Advisor: Prof. Mario Vento, Prof. N. Petkov (University of Groningen)
Reliable time series prediction and forecasting have many important applications, ranging from finance to supply chain management and inventory planning. Stock market prediction is regarded as a challenging financial time-series prediction task because of the non-stationary and chaotic nature of the data. The Efficient Market Hypothesis (EMH) asserts that the movement of the price of an asset is unpredictable, assuming that this movement evolves as a random walk; under this assumption, prediction or regression in finance would be an impossible task. Traditionally, Support Vector Regression (SVR), like other machine learning algorithms such as Artificial Neural Networks (ANNs), is used for classification and regression in pattern recognition applications. SVR is able to accurately forecast time series data when the underlying system processes are nonlinear, noisy and non-stationary, whereas ANNs exhibit inconsistent and unpredictable performance on chaotic data. Most works address forecasting using different datasets and performance measures, which makes comparisons difficult.

In this thesis, I analyzed whether it is possible to apply SVR to predict the future development of financial time series. In all the experiments, I used a smart normalization technique based on the current day for training the regressors. The first experiment compares three different data-modelling approaches. In the first approach, a regressor is trained on each time series. In the second, I used multiple simultaneous time series in parallel as regressor input, to investigate whether the price of a single stock can be influenced by multiple stocks in the same field of interest (e.g. the real estate or banking sector). In the third, I trained a single regressor on all the time series, exploiting an observation arising from the preprocessing method, which highlights trends in the data. The first experiment showed that training on all the time series performs better than the other two methods, but has some limitations. Considering the possible advantage of a hybrid solution, I proposed a two-stage architecture that clusters the samples before predicting the future values with a dedicated regressor, combining different clustering algorithms on the time series with SVR. Having identified some clustering issues, I propose projecting the data into a larger vector space through an approximation of the RBF kernel.

The performance is evaluated through the R² measure, using the EMH as baseline. I used a 5-fold cross validation (CV) suited to non-stationary and non-linear time series for validating the results and for model selection. The results are presented with a degree of confidence based on the p-value of a Student's t-test. Finally, I analyzed the performance distribution over time to determine the interval of time in which the regressor is reliable. The main result of this thesis is that a two-stage architecture with a kernel approximation can improve the performance with respect to the other methods. Hence, I suggest that the divide-and-conquer approach is better than the monolithic one, and I can conclude that the data may be distributed in a non-convex manner. Although my architecture achieves good results on average, the standard deviation is high, which shows how hard the problem really is.
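As an illustration of the two-stage idea (cluster the samples, then predict with a regressor dedicated to each cluster, using an explicit approximation of the RBF kernel), here is a minimal scikit-learn sketch on synthetic data; the clustering algorithm, kernel parameters and number of clusters are assumptions for the example, not the configuration used in the thesis.

```python
"""Two-stage sketch: cluster samples in an approximate RBF feature
space, then train one SVR per cluster. Data and parameters are
illustrative placeholders, not the thesis's actual configuration."""
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))   # windows of normalized past prices (dummy)
y = rng.standard_normal(500)         # next-day relative change (dummy)

# Stage 1: explicit approximation of the RBF kernel map, then clustering.
rbf = RBFSampler(gamma=0.5, n_components=100, random_state=0)
Z = rbf.fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(Z)

# Stage 2: one dedicated regressor per cluster.
regressors = {}
for c in np.unique(labels):
    m = labels == c
    regressors[c] = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X[m], y[m])

# Prediction routes each sample to its cluster's regressor.
test_labels = km.predict(rbf.transform(X))
y_hat = np.array([regressors[c].predict(x[None, :])[0]
                  for c, x in zip(test_labels, X)])
print("in-sample R^2:", r2_score(y, y_hat))
```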
|
Antonio Roberto (2018) – A method for forecasting financial time series based on empirical mode decomposition and manifold learning
Advisor: Prof. Mario Vento, Prof. N. Petkov (University of Groningen)
Non-linear time series regression and forecasting are well known as hard problems with many applications, ranging from financial time series analysis to signal processing, and from electric utility load forecasting to the prediction of natural phenomena. Stock market prediction is considered a challenging financial time series prediction task. Several approaches have been proposed in the literature for these problems. The Support Vector Regressor (SVR) has proven to be a good model for processing highly noisy, non-stationary signals. Furthermore, it is known that preprocessing affects the performance of regressors built with this method; suitable preprocessing approaches have therefore been developed and applied to clustering and regression tasks. Transformations to the frequency domain, such as the Fourier and Wavelet transforms, have been used; among these, good results have been obtained with the Empirical Mode Decomposition (EMD) for analyzing financial time series data. Some works focus on clustering the time series in order to group samples with similar statistical distributions and thus reduce the error due to non-stationarity. The problem with SVR and other methods based on learning from examples is that the training data can be fitted well by infinitely many models, but only a few of them generalize well on unseen data. To address this issue, it is desirable to use a large amount of data. Training on samples of different stocks requires abstracting from the specific time series; the most commonly used techniques are whitening and scaling. These methods are influenced by how representative the dataset is of the fundamental parameters (mean, min/max, etc.), which are not constant over time owing to the non-stationarity. Moreover, clustering algorithms use Euclidean distances and are therefore influenced by the way the data are preprocessed.

In this thesis, I conducted two macro-experiments. In the first, I analyzed how preprocessing affects the forecasting performance, proposing a new preprocessing method based on normalizing the feature vector by the current day (the most recent stock price in the vector); a sketch of this normalization is given below. Using this approach, I showed that it is possible to abstract away from the specific time series by focusing on the modelling of trends. In the second, I used EMD and manifold learning as preprocessing for the clustering stage of a two-stage architecture. In this way I reduced the error contribution due to non-stationarity, using denoised relative changes in the stock price to improve the ability of the clustering algorithms to group particular time series dynamics. I evaluated the performance by using the Efficient Market Hypothesis (EMH) as a baseline, measuring it with R² and a 5-fold Cross Validation (CV) specific to time series performance estimation. Taking into account the statistical nature of the CV, a Student's t-test is used to attach a degree of confidence to the obtained mean results. The results showed that the proposed normalization outperforms classic methods such as linear scaling, and that a preprocessing method based on Empirical Mode Decomposition and manifold learning can increase the performance of the two-stage architecture.
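A minimal sketch of one plausible form of the current-day normalization described above (dividing a window of past prices by the most recent price, so the regressor sees relative changes rather than absolute levels); the window length and values are illustrative.

```python
"""Normalize a window of past prices by the most recent (current-day)
price. Window length and prices are illustrative placeholders."""
import numpy as np

def normalize_by_current_day(window: np.ndarray) -> np.ndarray:
    """window[-1] is the most recent (current-day) price."""
    return window / window[-1] - 1.0   # 0.0 at the current day

prices = np.array([100.0, 102.0, 101.0, 104.0, 103.0])
print(normalize_by_current_day(prices))
# [-0.0291 -0.0097 -0.0194  0.0097  0.    ]  (approx.)
```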
|
Amedeo Fortino (2018) – A Stereo Matching Algorithm based on Max Trees
Advisor: Prof. Mario Vento, Prof. N. Petkov – Co-advisor: Ing. Nicola Strisciuglio, Ing. Antonio Greco (University of Groningen)
|
Francesca Sabatino (2018) – Photovoltaic Panel Detection with Deep Learning Techniques
Advisor: Prof. Mario Vento, Prof. N. Petkov – Co-advisor: Ing. Antonio Greco (University of Groningen)
|
Andrea Porfido (2017) – A real-time method for online face re-identification
Advisor: Prof. Mario Vento, Prof. N. Petkov – Co-advisor: Prof. Alessia Saggese, Ing. Antonio Greco (University of Groningen)
In recent years, face recognition has become one of the most interesting topics in Computer Vision, and it has brought many challenges and incentives for scientific research.
A face recognition system must identify a person by analyzing his/her face. The system is trained offline using one or more images of each person, and the aim is to identify known persons and reject strangers. This work is about face re-identification: its goal is the same as that of face recognition, but the main difference is that the database of faces is dynamically populated. So, in addition to a classic face recognition algorithm, the system must be able to track the face of a person and to collect some face images if the person is not already in the knowledge base.
After a brief introduction presenting the main issues of face recognition, the text surveys the current state of the art in face verification, a task simpler than recognition that consists in verifying whether two face images belong to the same subject. The survey presents the main categories and the most powerful methods, reporting the results of each on the LFW dataset. The "Fisher Vector Face" method is taken from this review and tested for face recognition together with the HOG (Histogram of Oriented Gradients) and LBP (Local Binary Pattern) descriptors. Experiments on various public datasets showed that HOG achieves the best performance. The text then presents a complete implementation of a face re-identification system, developed in C++ with the OpenCV library and web technologies. The system works in real time and performs online training on new incoming subjects. The last chapter describes possible real-world applications of face re-identification and mentions the future challenges of face analysis.
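The thesis's system is written in C++; purely as an illustration of the HOG-plus-matching idea, here is a hedged Python/OpenCV sketch in which the descriptor parameters and the distance threshold are assumptions, not the thesis's tuned values.

```python
"""Sketch of HOG-based face re-identification: compute a HOG descriptor
per face crop and match it against a gallery that grows online."""
import cv2
import numpy as np

# 64x64 window, 16x16 blocks, 8x8 stride and cells, 9 bins (illustrative).
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def describe(face_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))
    return hog.compute(gray).ravel()

gallery: dict[int, np.ndarray] = {}   # person id -> descriptor
THRESHOLD = 2.0                       # hypothetical distance threshold

def re_identify(face_bgr: np.ndarray) -> int:
    """Return the id of the closest known person, or enrol a new one."""
    d = describe(face_bgr)
    if gallery:
        pid, dist = min(((k, np.linalg.norm(d - v)) for k, v in gallery.items()),
                        key=lambda kv: kv[1])
        if dist < THRESHOLD:
            return pid
    new_id = len(gallery)
    gallery[new_id] = d               # online enrolment of a new subject
    return new_id
```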
|
Raffaella Patronelli (2017) – A real-time trainable method for face recognition
Advisor: Prof. Mario Vento, Prof. N. Petkov – Co-advisor: Prof. Alessia Saggese, Ing. Antonio Greco (University of Groningen)
|
Francesco Formato (2017) – Real time age estimation from face images
Advisor: Prof. Mario Vento, Prof. N. Petkov – Co-advisor: Prof. Alessia Saggese, Ing. Antonio Greco (University of Groningen)
|
Giovanni De Angelis (2017) – Analysis and experimental evaluation of real time gender recognition algorithms
Advisor: Prof. Mario Vento, Prof. S.A. Tabbone – Co-advisor: Prof. Alessia Saggese, Ing. Antonio Greco (Université Nancy)
In recent years gender recognition from face images has become a very popular topic, especially in the fields of security and retail. This interest is mainly due to the many real applications that can profitably be designed around it.
The aim of this thesis is an experimental analysis of gender recognition algorithms applied to images acquired in real-time environments. The thesis investigates the performance drop observed when images acquired in real scenarios are processed, and identifies the causes of this drop.
First the thesis provides an introduction to the gender recognition topic, describing the reasons for the growing interest. Then it reviews the relevant methods proposed to solve the problem of gender recognition from face images. Starting from the state-of-the-art analysis, a new method optimized for real-time gender recognition is designed and described. For the performance comparison, we chose the best commercial gender recognition solutions as competitors.
The thesis continues with a detailed list of all the datasets (public and private) used to test the algorithms. The use of both public and private datasets aims to demonstrate that most of the available methods achieve excellent accuracy on public datasets, but that performance dramatically decreases when the methods are evaluated on new images acquired in real time.
The experimental results section contains the detailed results of the performance evaluation. They demonstrate the superiority of the proposed method trained on real-time images. The thesis ends with a discussion of the experimental results and a critical explanation of the problems encountered when processing images acquired in real time.
|
Gianluigi Mucciolo (2017) – Calibration of a mirror system for the analysis of patients affected by paralysis
Advisor: Prof. Mario Vento, Prof. Walter Kropatsch – Co-advisor: Prof. Alessia Saggese (TU Wien)
|
Leonardo Oliva (2017) – A video-analysis algorithm for tracking and interpreting the behaviour of birds
Advisor: Prof. Mario Vento, Prof. Walter Kropatsch – Co-advisor: Prof. Alessia Saggese (TU Wien)
|
Roberto Pisapia (2016) – Disparity map extraction for a low cost 3D sensor
Advisor: Prof. Mario Vento, Prof. S.A. Tabbone – Co-advisor: Prof. Alessia Saggese, Ing. Antonio Greco (Université Nancy)
The aim of this thesis is the development of a new low cost sensor for 3D acquisition. The sensor provides several features: a tool for its initial configuration, synchronized acquisition from both cameras, rectification of the captured images, and processing of the images to obtain a range map usable in many different applications.
Devices that provide a three-dimensional representation of the observed environment are expensive; the purpose of this work is therefore to build a low cost sensor that makes stereo acquisition possible and produces a depth image from the disparity map. This sensor can be used for counting and classifying people, or for a 3D reconstruction of the observed environment.
The 3D sensor consists of two cameras in a PVC container, connected via USB to a Raspberry Pi 2, which handles the video stream acquisition and the image processing.
The first step is the assembly of the components; particular attention was paid to the layout of the cameras, to avoid misalignments that could negatively influence the result of the disparity map extraction. Even with the two cameras aligned, an initial calibration is always necessary to eliminate the radial distortions related to inherent defects of the camera lenses. For this initial calibration a simple tool is provided which, with the aid of a chessboard pattern, obtains the calibration parameters used to remove the distortion and any residual misalignment between the cameras.
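As an illustration of such a chessboard-based calibration, here is a hedged OpenCV sketch (the thesis's tool is a separate C++ application; the board size and image paths below are placeholders).

```python
"""Chessboard calibration sketch with OpenCV: detect inner corners in a
set of chessboard images, then estimate the camera matrix and the
distortion coefficients. Board size and paths are placeholders."""
import glob
import cv2
import numpy as np

PATTERN = (9, 6)                      # inner corners per row/column
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics K and distortion coefficients d for one camera; repeating
# this per camera (plus cv2.stereoCalibrate) yields the stereo parameters.
rms, K, d, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```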
For the acquisition and decoding of the video streams from the cameras we use the FFMPEG library, which makes it possible to acquire the individual video streams and then decode them into the desired format. One of the features provided by the sensor is synchronous acquisition from the cameras, turning two autonomous cameras into a real stereo camera.
An additional feature is the rectification of the stereo images which, using the parameters obtained from the calibration performed during the configuration of the cameras, removes the radial distortion caused by the lens imperfections.
The algorithm used to obtain the depth image is a generative probabilistic model for stereo matching, which allows dense matching with small aggregation windows by reducing ambiguities in the correspondences.
The employed approach builds a prior over the disparity space by forming a triangulation on a set of robustly matched correspondences, named "support points". The implemented algorithm does not suffer in the presence of poorly textured and slanted surfaces, and therefore suits a wider range of environments. Moreover, to increase performance, a large part of the code was parallelized using the NEON SIMD instruction set available on the Raspberry Pi CPU.
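The generative support-point model itself is not available off the shelf; purely as a generic stand-in for the rectified-pair-to-disparity step, a semi-global block matcher from OpenCV can be sketched as follows (parameters are illustrative, and this is not the thesis's method).

```python
"""Disparity from a rectified stereo pair with OpenCV's semi-global
block matcher: a generic stand-in, NOT the support-point-based
generative model of the thesis. Paths and parameters are placeholders."""
import cv2

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,       # must be divisible by 16
    blockSize=9,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left, right).astype("float32") / 16.0
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```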
The last step of this study was to measure the frame rate of the stereo camera, which settled at 4 fps, and the error rate of the disparity map computation, which measured 27.3% on the Middlebury Stereo Dataset.
A future development of this work is to turn the 3D sensor into a plug and play device, usable immediately with only a rough configuration. Furthermore, by upgrading the hardware to a newer and more powerful CPU, the potential of the sensor can be increased and its performance improved.
|
Antonio Terrone (2016) – People counting and height estimation from overhead depth images
Advisor: Prof. Mario Vento, Prof. S.A. Tabbone – Co-advisor: Prof. Alessia Saggese, Ing. Antonio Greco (Université Nancy)
Automatically counting the number of people passing a specific point is of paramount importance in applications such as surveillance, monitoring, and human–machine interaction. There are real scenarios where this application is very interesting: estimating the crowd density in public places can help managers identify unsafe situations and regulate traffic appropriately, and a public museum can control the number of people entering based on real-time people-flow information. The retail domain also represents one of the most typical scenarios where such systems are used: information about the number of people present in a shop or in a mall is very relevant.
Many methods have been proposed in the literature to reach this goal, but they achieve success only in specific situations. Due to occlusions and to variations of illumination, color and texture, the problem is far from being solved in all real scenarios, such as buses, trains, crowded scenes and so on.
The vertical depth information generated by a Kinect sensor can simplify the people counting problem, but problems remain in real applications. People in the same scene may appear at various scales or depths, and a crowd produces a complex depth map with multiple local extrema. Besides, the raw 3D data from the Kinect sensor are very noisy, which makes the depth map discontinuous, so it is difficult to achieve good performance with traditional clustering methods such as mean shift.
With the method presented in this work, instead, people counting achieves high accuracy in several challenging real scenarios, as the results confirm. We use a vertical camera, since the occlusion problem is then automatically solved, and we analyse the resulting depth images to count people and to estimate their heights. The proposed method is based on head detection, which enables both results: with a zenithal depth camera the head is always closer to the sensor than the other parts of the body, so detecting a person's head amounts to finding the suitable local minimum regions in the depth image.
The local minimum regions are found by simulating a water-filling process: raindrops fall from the sky onto the land and, under gravity, find their way to the low places. The depth map can thus be seen as a landscape with humps and hollows, in which the depth value represents the height. With this method we find a person's head and, by tracking it, we can count people and estimate their heights.
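A simplified sketch of the water-filling idea, in which each "raindrop" follows the steepest descent to a local minimum of the depth map; the tiny map and the drop threshold are illustrative assumptions, not the thesis's implementation.

```python
"""Water-filling sketch for head detection in an overhead depth map:
one drop per pixel flows along the steepest descent to a local minimum,
and basins that collect many drops are head candidates."""
import numpy as np

def water_filling(depth: np.ndarray) -> np.ndarray:
    """Return, per pixel, how many drops end up there."""
    h, w = depth.shape
    water = np.zeros_like(depth, dtype=int)
    for y0, x0 in np.ndindex(h, w):
        y, x = y0, x0
        while True:
            # Look in the 8-neighbourhood for the lowest strictly lower cell.
            best, best_d = (y, x), depth[y, x]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] < best_d:
                        best, best_d = (ny, nx), depth[ny, nx]
            if best == (y, x):          # local minimum reached
                water[y, x] += 1
                break
            y, x = best
    return water

depth = np.array([[5, 5, 5, 5, 5],
                  [5, 2, 4, 5, 5],     # hollow at (1, 1): a head candidate
                  [5, 3, 5, 3, 5],
                  [5, 5, 5, 2, 5],     # second hollow at (3, 3)
                  [5, 5, 5, 5, 5]], dtype=float)
basins = water_filling(depth)
print(np.argwhere(basins >= 5))        # basins with enough water = heads
```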
The algorithm consists of several steps, described below.
Before applying the method that extracts these regions, some morphological pre-processing operations are performed on the image. The most important one is a threshold based on the distance, following the idea that everything very close to the floor is irrelevant to the final result.
The second step is the central algorithm, which produces an image in which water has accumulated at the points of interest, in this case the heads (the local minimum regions).
The third phase consists of post-processing operations: in particular, a morphological opening (an erosion of the image followed by a dilation) is carried out to filter out noise and leave only the heads in the scene.
In the fourth step we use the tracking module, which is essential for counting people and detecting the crossing direction: each object is associated with an ID, so that it can be identified, and with a rectangle that encloses it.
After the detection, in the fifth step, the height of each person can be calculated using the distance between the camera and the person's head, obtained in the previous steps.
In the last phase the final counting is carried out: the idea is to verify whether the points of the rectangle associated with the object cross the counting line. The line can be crossed in either direction, and a person is counted when at least three of the five points of the rectangle (the four corners and the center) move from one side to the other, as in the sketch below.
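A minimal sketch of this counting rule; the line position and the rectangles are placeholders.

```python
"""Counting rule sketch: a tracked person is counted when at least 3 of
the 5 rectangle points (4 corners + center) have crossed the line."""
LINE_Y = 240                           # hypothetical counting line (image row)

def rect_points(x, y, w, h):
    """Four corners and the center of a bounding rectangle."""
    return [(x, y), (x + w, y), (x, y + h), (x + w, y + h),
            (x + w // 2, y + h // 2)]

def has_crossed(prev_rect, curr_rect, line_y=LINE_Y) -> bool:
    """True if >= 3 of the 5 points moved from one side to the other."""
    moved = sum(
        (py < line_y) != (cy < line_y)
        for (_, py), (_, cy) in zip(rect_points(*prev_rect),
                                    rect_points(*curr_rect))
    )
    return moved >= 3

print(has_crossed((100, 200, 50, 30), (100, 250, 50, 30)))  # True
```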
The algorithm returns a file containing the list of detected objects and the direction of each crossing.
We used a dataset composed of two types of video, indoor and outdoor, each further divided into two classes: crossings of a single person and crossings of groups. The algorithm was evaluated on this dataset and the results were compared with the ground truth produced on the same dataset by another method from the literature. The precision, recall and F-score performance indexes were computed from the numbers of true positives, false positives and false negatives. We also analysed the computational load of the people counting method, profiling all the algorithm steps on all the videos of the dataset.
The comparison shows that this algorithm achieves higher accuracy than the other method. On the complete dataset (every video) the overall values are:
• Total Precision: 0.99;
• Total Recall: 0.98;
• Total F-Score: 0.99.
Future work on this thesis could focus on improving the pre- and post-processing to obtain even fewer false positives and false negatives, perhaps using ROIs or other techniques.
|
Carmine Sansone (2016) – An algorithm for tracking swimming worms
Advisor: Prof. Mario Vento, Prof. Walter Kropatsch – Co-advisor: Prof. Alessia Saggese (TU Wien)
|
Marco Gerardi (2016) – Comparative analysis of features for live audio events detection
Advisor: Prof. Mario Vento, Prof. Nicolai Petkov – Co-advisor: Ing. Nicola Strisciuglio (University of Groningen)
|
Carmine Volpe (2016) – Moving object detection and tracking from video captured by a moving camera
Advisor: Prof. Mario Vento, Prof. Xiaoyi Jiang – Co-advisor: Ing. Antonio Greco (University of Münster)
|
Dario De Rosa (2015) – Graph Matching for pattern search in biological databases: performance estimation and optimizations
Advisor: Prof. Pasquale Foggia, Prof. Xiaoyi Jiang – Co-advisor: Prof. Mario Vento (University of Münster)
The aim of the thesis is to design a system that can predict which Graph Matching algorithm is the best one for solving the problem of finding all possible matches of a pattern graph within a target graph. The system is set in a context where the graphs model typical biological structures such as molecules, proteins and protein contact maps. The main idea is to extract particular structural features from the graphs, so that a Support Vector Machine can finally be trained on them. For the test phase, to make the results of the system statistically significant, the "Bio" graph database made available by the Mivia Lab was extended; it was originally created for the contest "Graph Matching Algorithms for Pattern Search in Biological Databases" held in Stockholm in August 2014.
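As an illustration of the idea (structural graph features feeding an SVM that predicts the best matching algorithm), here is a hedged sketch with networkx and scikit-learn; the feature set, labels and data are placeholders, not the thesis's actual choices.

```python
"""Sketch: describe each (pattern, target) pair by simple structural
features and train an SVM to predict the fastest matching algorithm.
Features, labels and graphs are illustrative placeholders."""
import networkx as nx
import numpy as np
from sklearn.svm import SVC

def graph_features(g: nx.Graph) -> list[float]:
    degs = [d for _, d in g.degree()]
    return [g.number_of_nodes(), g.number_of_edges(),
            nx.density(g), float(np.mean(degs)), float(np.max(degs))]

def pair_features(pattern: nx.Graph, target: nx.Graph) -> list[float]:
    return graph_features(pattern) + graph_features(target)

# Dummy training set: random graph pairs labelled with a hypothetical
# "fastest algorithm" id (0 or 1); real labels would come from timings.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(60):
    p = nx.gnp_random_graph(10, 0.3, seed=int(rng.integers(1 << 30)))
    t = nx.gnp_random_graph(50, 0.1, seed=int(rng.integers(1 << 30)))
    X.append(pair_features(p, t))
    y.append(int(rng.integers(2)))     # placeholder labels

clf = SVC(kernel="rbf").fit(np.array(X), y)
print("predicted best algorithm:", clf.predict([X[0]])[0])
```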
|
Gennaro Finizio (2015) – Automatic cell detection in HEp-2 images
Advisor: Prof. Mario Vento, Prof. Xiaoyi Jiang – Co-advisor: Prof. Gennaro Percannella (University of Münster)
|
Mario Speranza (2015) – A method for action recognition using depth images
Advisor: Prof. Pasquale Foggia, Prof. Xiaoyi Jiang – Co-advisor: Prof. Mario Vento (University of Münster)
|
Nicola Apicella (2015) – Detection of hazardous situations in roads by audio analysis
Advisor: Prof. Mario Vento, Prof. Nicolai Petkov – Co-advisor: Ing. Nicola Strisciuglio (University of Groningen)
|
Raffaele Cafaro (2015) – Audio events detection using trainable filters
Advisor: Prof. Mario Vento, Prof. Nicolai Petkov – Co-advisor: Ing. Nicola Strisciuglio (University of Groningen)
|
Antonio Duca (2015) – Responsive UI Design for liquid web software
Advisor: Prof. Mario Vento – Co-advisor: Prof. Gennaro Percannella (University of Tampere)
|
Francesco De Martino (2015) – Human skeletal action recognition with S-COSFIRE filters
Advisor: Prof. Mario Vento, Prof. Nicolai Petkov – Co-advisor: Ing. Alessia Saggese, Ing. Nicola Strisciuglio (University of Groningen)
Human motion analysis is nowadays one of the most important and challenging topics in computer vision. It consists in the automatic detection and classification of actions using information acquired from different kinds of sensors, such as RGB cameras, range sensors or marker-based systems.
In this thesis an innovative trainable skeletal pattern descriptor, named S-COSFIRE, is proposed. The S-COSFIRE filter provides a new representation of the spatial configuration of body parts, which is obtained automatically by training the filter from samples. This representation is then enriched and processed at a higher level to take into account the temporal evolution that characterizes the movement.
The developed algorithm has been tested on three different public datasets, providing results that are comparable with the best state-of-the-art solutions (obtained accuracy: MHAD 97.4% – MIVIA-S 95% – MSRDA 68%).
|
Andrea Iuliano (2014) – A Kernel based approach for the recognition of human actions
Advisor: Prof. Mario Vento, Prof. Luc Brun – Co-advisor: Ing. Alessia Saggese (ENSICAEN)
|
Benito Cappellania (2014) – Detection of anomalous human behaviors by graph based representation
Advisor: Prof. Mario Vento, Prof. Luc Brun – Co-advisor: Ing. Alessia Saggese (ENSICAEN)
|
Walter Petretta (2014) – Improving the reliability of an audio surveillance system robust to high variation of the SNR
Advisor: Prof. Mario Vento, Prof. Nicolai Petkov – Co-advisor: Ing. Nicola Strisciuglio (University of Groningen)
|
Roberto Pacilio (2013) – Feasibility study of a multi-camera video analysis system for “Mondeville 2 Shopping Centre” of Caen
Advisor: Prof. Mario Vento, Prof. Luc Brun – Co-advisor: Ing. Alessia Saggese (ENSICAEN – Université de Caen Basse-Normandie)
The project focuses on the feasibility of a multi-camera video analysis system aimed at supporting strategic marketing decisions inside the Mondeville 2 shopping centre in Caen.
Within the global scene, reconstructed from the images acquired by the individual cameras, a tracking algorithm identifies the main customer flows in the area of interest; finally, salient information about the hot and cold spots of the shopping centre is extracted.
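As an illustration of the hot/cold spot extraction, a hedged sketch that accumulates tracked positions into a 2D histogram over the floor plan; the floor size, cell size and the fake tracks are invented placeholders.

```python
"""Sketch of hot/cold spot extraction: accumulate tracked customer
positions (in a common floor-plan frame) into a 2D histogram whose
dense cells are the 'hot' spots. All data here are placeholders."""
import numpy as np

FLOOR_W, FLOOR_H, CELL = 100.0, 60.0, 2.0     # metres, hypothetical floor plan

def heat_map(tracks: list[np.ndarray]) -> np.ndarray:
    """tracks: one (N, 2) array of (x, y) positions per tracked customer."""
    pts = np.vstack(tracks)
    hist, _, _ = np.histogram2d(
        pts[:, 0], pts[:, 1],
        bins=(int(FLOOR_W / CELL), int(FLOOR_H / CELL)),
        range=[[0, FLOOR_W], [0, FLOOR_H]])
    return hist

rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(0, 0.5, (200, 2)), axis=0) + [50, 30]
          for _ in range(20)]                 # random walks as fake tracks
h = heat_map(tracks)
print("hottest cell:", np.unravel_index(h.argmax(), h.shape))
```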
|
Giuseppe D’Alessio (2013) – Forecasting of trend changes in stock markets: COSFIRE and reverse correlation approach
Advisor: Prof. Mario Vento, Prof. Nicolai Petkov – Co-advisor: Prof. Pasquale Foggia (University of Groningen)
|
Vincenzo De Notaris (2013) – A multi-camera tracking algorithm devised for business intelligence applications in a shopping centre using visual data
Advisor: Prof. Mario Vento, Prof. Luc Brun – Co-advisor: Ing. Alessia Saggese (ENSICAEN – Université de Caen Basse-Normandie)
The project focuses on the feasibility of a multi-camera video analysis system aimed at supporting strategic marketing decisions inside the Mondeville 2 shopping centre in Caen.
Within the global scene, reconstructed from the images acquired by the individual cameras, a tracking algorithm identifies the main customer flows in the area of interest; finally, salient information about the hot and cold spots of the shopping centre is extracted.
|
Francesco Maria Lettieri (2013) – A holistic approach to IIF image classification using rotation-invariant CoALBP
Advisor: Prof. Mario Vento – Co-advisor: Prof. Gennaro Percannella (University of Groningen)
The detection of antinuclear antibodies through IIF (indirect immunofluorescence) images is an important part of immunology and clinical medicine. There are more than 35 different patterns that can be found in an IFA (immunofluorescence assay), as described by medical specialists; when correctly classified, they can provide an important contribution to the diagnosis of some diseases. These patterns can be divided into six macro-classes: Nucleolar, Centromere, Cytoplasmic, Fine Speckled, Coarse Speckled and Homogeneous. This work proposes a holistic approach to this classification problem using computer vision; it improves one of the existing methods, namely Nosaka's.
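CoALBP itself is not available in common libraries; as a simplified stand-in for the holistic texture-descriptor idea, here is a rotation-invariant LBP histogram with scikit-image feeding an SVM. This is plain LBP, not CoALBP, and all parameters and data are placeholders.

```python
"""Holistic texture classification sketch with rotation-invariant
uniform LBP: a simplified stand-in for the CoALBP descriptor."""
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                                  # neighbours and radius

def describe(cell_image: np.ndarray) -> np.ndarray:
    """Uniform LBP histogram of a grayscale cell image."""
    lbp = local_binary_pattern(cell_image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Dummy data: random 'cell images' with random macro-class labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, (30, 64, 64), dtype=np.uint8)
X = np.array([describe(img) for img in images])
y = rng.integers(6, size=30)                 # 6 macro-classes, placeholder
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```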
|
Nicola Strisciuglio (2012) – Segmentation of blood vessels in retinal images using the CORF operator
Advisor: Prof. Mario Vento – Co-advisor: Prof. Pasquale Foggia (University of Groningen)
The automatic segmentation of blood vessels in retinal images is the first step of a diagnostic system for retinopathies. It is considered a difficult task because of the variability of the size of the blood vessels and of their tortuosity, but it can help specialists perform a greater number of diagnoses.
METHOD:
In this work, a novel method for blood vessel segmentation is proposed, based on the COmbination of Receptive Fields (CORF) computational model. CORF is a model of the simple cells of area V1 of the human visual system, whose task is to identify lines and contours.
RESULTS:
The performance of the algorithm was evaluated on two datasets, DRIVE and STARE. The accuracy and the area under the ROC curve were higher than those of many other published methods on both datasets.
|
Vincenzo Capozzoli (2012) – Camera Calibration by keypoint detection using COSFIRE filter
Advisor: Prof. Mario Vento – Co-advisor: Prof. Pasquale Foggia (University of Groningen)
The loss of information that occurs when a real-world scene is projected onto an image makes camera calibration a non-trivial task. Recovering this information is useful in many applications, in fields ranging from camera-equipped robots to biomedicine.
Camera calibration was performed with Zhang's method, making use of the novel COSFIRE detector. COSFIRE is a keypoint detector inspired by the behaviour of a particular type of cells of area V4 of the visual cortex.
Our algorithm achieved performance twice as good as that of the corner detectors used in the literature for calibrations based on Zhang's method.
|