2016
Gaetano Manzo; Francesc Serratosa; Mario Vento. Online human assisted and cooperative pose estimation of 2D cameras. Journal article, Expert Systems with Applications, 60, pp. 258–268, 2016. DOI: 10.1016/j.eswa.2016.05.012. URL: http://www.sciencedirect.com/science/article/pii/S0957417416302305. Keywords: Image analysis and recognition.
Abstract: Autonomous robots performing cooperative tasks need to know the relative pose of the other robots in the fleet. In applications where there are no landmarks or GPS, for instance in unexplored indoor environments, these poses can be deduced through structure-from-motion methods. Structure from motion is a technique that deduces the pose of cameras given only the 2D images. It relies on a first step that obtains a correspondence between salient points of the images; its weakness is therefore that poses cannot be estimated when a proper correspondence is not obtained, due to low-quality images or images that do not share enough salient points. We propose, for the first time, an interactive structure-from-motion method to deduce the pose of 2D cameras. Autonomous robots with embedded cameras have to stop when they cannot deduce their position because the structure-from-motion method fails. In these cases, a human intervenes by simply mapping a pair of points across the robots' images, thereby imposing the correct correspondence between them. The interactive structure from motion is then capable of deducing the robots' lost positions, and the fleet of robots can continue its high-level task. From a practical point of view, the interactive method allows the whole system to achieve more complex tasks in more complex environments, since the human interaction can be seen as a recovery or reset process.
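The pose-from-correspondences step this abstract relies on is the classical two-view relative pose problem. As a hedged illustration only, and not the paper's human-in-the-loop algorithm, the sketch below shows how a relative camera pose is commonly recovered from matched 2D points with OpenCV; the intrinsic matrix K, the synthetic scene and all numeric values are assumptions made for the example.

```python
# Sketch: two-view relative pose from point correspondences (generic OpenCV pipeline,
# not the paper's interactive method). pts1/pts2 are matched pixel coordinates;
# in the paper, a human can supply such matches when automatic matching fails.
import numpy as np
import cv2

def relative_pose(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    # Robustly estimate the essential matrix from the correspondences.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    # Decompose it into rotation R and unit-norm translation t of camera 2 w.r.t. camera 1.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t

# toy usage: project a synthetic 3-D point cloud into two views (assumed setup)
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
pts1, _ = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, None)
pts2, _ = cv2.projectPoints(X, np.array([0.0, 0.1, 0.0]), np.array([0.2, 0.0, 0.0]), K, None)
R, t = relative_pose(pts1.reshape(-1, 2), pts2.reshape(-1, 2), K)
print(R, t.ravel())
```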
Alessia Saggese; Nicola Strisciuglio; Mario Vento; Nicolai Petkov. Time-frequency analysis for audio event detection in real scenarios. In: 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 438-443, 2016. DOI: 10.1109/AVSS.2016.7738082.
Abstract: We propose a sound analysis system for the detection of audio events in surveillance applications. The method that we propose combines short- and long-time analysis in order to increase the reliability of the detection. The basic idea is that a sound is composed of small, atomic audio units and some of them are distinctive of a particular class of sounds. Similarly to the words in a text, we count the occurrence of audio units for the construction of a feature vector that describes a given time interval. A classifier is then used to learn which audio units are distinctive for the different classes of sound. We compare the performance of different sets of short-time features by carrying out experiments on the MIVIA audio event data set. We study the performance and the stability of the proposed system when it is employed in live scenarios, so as to characterize its expected behavior when used in real applications.
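The "audio units counted like words in a text" idea is a bag-of-words model over short-time features. The sketch below is a generic, hedged illustration of that scheme, not the exact features or classifier of the paper: frames are quantized against a learned codebook and the per-interval histogram of codeword occurrences feeds a classifier. The frame descriptors, codebook size and classifier choice are placeholder assumptions.

```python
# Sketch: generic bag-of-audio-words over short-time frames (not the paper's exact pipeline).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def learn_codebook(frame_features: np.ndarray, n_units: int = 64) -> KMeans:
    """frame_features: (n_frames, n_dims) short-time descriptors pooled from training audio."""
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(frame_features)

def interval_histogram(frames: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Count how often each 'audio unit' (codeword) occurs in one time interval."""
    units = codebook.predict(frames)
    hist = np.bincount(units, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)              # normalized occurrence counts

# toy usage with random data standing in for real short-time features
rng = np.random.default_rng(0)
codebook = learn_codebook(rng.normal(size=(2000, 20)))
X = np.stack([interval_histogram(rng.normal(size=(100, 20)), codebook) for _ in range(40)])
y = rng.integers(0, 3, size=40)                     # e.g. three sound classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```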
Luca Greco; Pierluigi Ritrovato; Alessia Saggese; Mario Vento. Improving reliability of people tracking by adding semantic reasoning. In: 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 194-199, 2016. DOI: 10.1109/AVSS.2016.7738025.
Abstract: Even the best-performing object tracking algorithms on well-known datasets commit several errors that prevent their concrete adoption in real scenarios, unless some compromise on tracking quality and reliability is accepted. The aim of this paper is to demonstrate that, by adding to a traditional object tracking solution a knowledge-based reasoner built on top of semantic web technologies, it is possible to identify and properly manage common tracking problems. The proposed approach has been evaluated on View 001 and View 003 of the PETS2009 dataset with interesting results.
George Azzopardi; Antonio Greco; Mario Vento. Gender recognition from face images with trainable COSFIRE filters. In: 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 235-241, 2016. DOI: 10.1109/AVSS.2016.7738068.
Abstract: Gender recognition from face images is an important application in the fields of security, retail advertising and marketing. We propose a novel descriptor based on COSFIRE filters for gender recognition. A COSFIRE filter is trainable, in that its selectivity is determined in an automatic configuration process that analyses a given prototype pattern of interest. We demonstrate the effectiveness of the proposed approach on a new dataset called GENDER-FERET with 474 training and 472 test samples and achieve an accuracy rate of 93.7%. It also outperforms an approach that relies on handcrafted features and an ensemble of classifiers. Furthermore, we perform another experiment by using the images of the Labeled Faces in the Wild (LFW) dataset to train our classifier and the test images of the GENDER-FERET dataset for evaluation. This experiment demonstrates the generalization ability of the proposed approach and it also outperforms two commercial libraries, namely Face++ and Luxand.
Danilo Cavaliere; Sabrina Senatore; Mario Vento; Vincenzo Loia. Towards semantic context-aware drones for aerial scenes understanding. In: 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 115-121, IEEE Computer Society, Los Alamitos, CA, USA, 2016. DOI: 10.1109/AVSS.2016.7738062.
Abstract: Visual object tracking with unmanned aerial vehicles (UAVs) plays a central role in aerial surveillance. Reliable object detection depends on many factors, such as large displacements, occlusions, image noise, illumination and pose changes, or image blur, that may compromise the object labeling. The paper presents a hybrid solution that adds semantic information to the video tracking processing: along with the tracked objects, the scene is completely depicted by data from places, natural features or, in general, Points of Interest (POIs). Each scene from a video sequence is semantically described by ontological statements which, by inference, support the object identification that often suffers from weaknesses in the object tracking methods. The synergy between the tracking methods and semantic technologies seems to bridge the object labeling gap and to enhance situation awareness, as well as the detection of critical alarm situations.
Pasquale Foggia; Nicolai Petkov; Alessia Saggese; Nicola Strisciuglio; Mario Vento. Audio Surveillance of Roads: A System for Detecting Anomalous Sounds. IEEE Transactions on Intelligent Transportation Systems, 17(1), pp. 279-288, 2016. ISSN: 1524-9050. DOI: 10.1109/TITS.2015.2470216.
Abstract: In the last decades, several systems based on video analysis have been proposed for automatically detecting accidents on roads to ensure a quick intervention of emergency teams. However, in some situations, the visual information is not sufficient or sufficiently reliable, whereas the use of microphones and audio event detectors can significantly improve the overall reliability of surveillance systems. In this paper, we propose a novel method for detecting road accidents by analyzing audio streams to identify hazardous situations such as tire skidding and car crashes. Our method is based on a two-layer representation of an audio stream: at a low level, the system extracts a set of features that is able to capture the discriminant properties of the events of interest, and at a high level, a representation based on a bag-of-words approach is then exploited in order to detect both short and sustained events. The deployment architecture for using the system in real environments is discussed, together with an experimental analysis carried out on a data set made publicly available for benchmarking purposes. The obtained results confirm the effectiveness of the proposed approach.
George Azzopardi; Antonio Greco; Mario Vento. Gender Recognition from Face Images Using a Fusion of SVM Classifiers. Book chapter in: Campilho, Aurélio; Karray, Fakhri (eds.): Image Analysis and Recognition: 13th International Conference, ICIAR 2016, in Memory of Mohamed Kamel, Póvoa de Varzim, Portugal, July 13-15, 2016, Proceedings, pp. 533–538, Springer International Publishing, Cham, 2016. ISBN: 978-3-319-41501-7. DOI: 10.1007/978-3-319-41501-7_59.
Abstract: The recognition of gender from face images is an important application, especially in the fields of security, marketing and intelligent user interfaces. We propose an approach to gender recognition from faces by fusing the decisions of SVM classifiers. Each classifier is trained with different types of features, namely HOG (shape), LBP (texture) and raw pixel values. For the latter features we use an SVM with a linear kernel and for the two former ones we use SVMs with histogram intersection kernels. We come to a decision by fusing the three classifiers with a majority vote. We demonstrate the effectiveness of our approach on a new dataset that we extract from FERET. We achieve an accuracy of 92.6%, which outperforms the commercial products Face++ and Luxand.
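A hedged sketch of the fusion scheme the abstract describes follows: three SVMs on HOG, LBP-histogram and raw-pixel features, histogram intersection kernels for the first two, a linear kernel for the third, combined by majority vote. The feature parameters, the 64x64 face size and the toy data are illustrative assumptions, not the paper's configuration.

```python
# Sketch: majority-vote fusion of three SVMs (HOG, LBP histogram, raw pixels).
# Feature parameters and image size are assumptions made for the example.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def hist_intersection(A, B):
    """Histogram intersection kernel: K[i, j] = sum_k min(A[i, k], B[j, k])."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

def features(faces):
    """faces: (n, 64, 64) grayscale images with values in [0, 1]."""
    F_hog = np.stack([hog(f, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for f in faces])
    F_lbp = np.stack([np.histogram(local_binary_pattern(f, P=8, R=1, method="uniform"),
                                   bins=10, range=(0, 10), density=True)[0] for f in faces])
    F_raw = faces.reshape(len(faces), -1)
    return F_hog, F_lbp, F_raw

def train(faces, labels):
    F_hog, F_lbp, F_raw = features(faces)
    return [SVC(kernel=hist_intersection).fit(F_hog, labels),   # shape expert
            SVC(kernel=hist_intersection).fit(F_lbp, labels),   # texture expert
            SVC(kernel="linear").fit(F_raw, labels)]            # raw-pixel expert

def predict(models, faces):
    votes = np.stack([m.predict(F) for m, F in zip(models, features(faces))])
    return (votes.sum(axis=0) >= 2).astype(int)   # majority vote, labels assumed 0/1

# toy usage with random images standing in for face crops
faces = np.random.rand(10, 64, 64); labels = np.random.randint(0, 2, 10)
print(predict(train(faces, labels), faces))
```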
Carmine Sansone; Daniel Pucher; Nicole M. Artner; Walter G. Kropatsch; Alessia Saggese; Mario Vento. Shape Normalizing and Tracking Dancing Worms. Book chapter in: Robles-Kelly, Antonio; Loog, Marco; Biggio, Battista; Escolano, Francisco; Wilson, Richard (eds.): Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshop, S+SSPR 2016, Mérida, Mexico, November 29 - December 2, 2016, Proceedings, pp. 390–400, Springer International Publishing, Cham, 2016. ISBN: 978-3-319-49055-7. DOI: 10.1007/978-3-319-49055-7_35.
Abstract: During spawning, the marine worms Platynereis dumerilii exhibit certain swimming behaviors, which are described as nuptial dance. To address the hypothesis that characteristic male and female spawning behaviors are required for successful spawning and fertilization, we propose a 2D tracking approach enabling the extraction of spatio-temporal data to quantify gender-specific behaviors. One of the main issues is the complex interaction between the worms leading to collisions, occlusions, and interruptions of their continuous trajectories. To maintain the individual identities under these challenging interactions a combined tracking and re-identification approach is proposed. The re-identification is based on a set of features, which take into account position, shape and appearance of the worms. These features include the normalized shape of a worm, which is computed using a novel approach based on its distance transform and skeleton.
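The last sentence mentions two standard ingredients, the distance transform and the skeleton of the worm mask. The following is a minimal, hedged sketch of just those two ingredients and of the local width they yield, not the authors' normalization pipeline; the binary mask and the way the widths would be ordered along the skeleton are assumptions.

```python
# Sketch: skeleton and local width of a binary worm mask (illustrative only).
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def skeleton_width_profile(mask: np.ndarray):
    """mask: 2-D boolean array, True inside the worm."""
    skel = skeletonize(mask)                 # medial-axis-like curve of the worm
    dist = distance_transform_edt(mask)      # distance of each pixel to the background
    ys, xs = np.nonzero(skel)
    widths = 2.0 * dist[ys, xs]              # local thickness at each skeleton pixel;
    return skel, widths                      # ordering these from one endpoint would give
                                             # a pose-normalized 1-D shape profile

# toy usage: a straight bar as a stand-in for a worm mask
mask = np.zeros((40, 120), dtype=bool)
mask[15:25, 10:110] = True
skel, widths = skeleton_width_profile(mask)
print(skel.sum(), "skeleton pixels, mean width ~", widths.mean())
```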
Nicola Strisciuglio; George Azzopardi; Mario Vento; Nicolai Petkov. Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters. Machine Vision and Applications, 27(8), pp. 1137–1149, 2016. ISSN: 1432-1769. DOI: 10.1007/s00138-016-0781-7.
Abstract: The inspection of retinal fundus images allows medical doctors to diagnose various pathologies. Computer-aided diagnosis systems can be used to assist in this process. As a first step, such systems delineate the vessel tree from the background. We propose a method for the delineation of blood vessels in retinal images that is effective for vessels of different thickness. In the proposed method, we employ a set of B-COSFIRE filters selective for vessels and vessel-endings. Such a set is determined in an automatic selection process and can adapt to different applications. We compare the performance of different selection methods based upon machine learning and information theory. The results that we achieve by performing experiments on two public benchmark data sets, namely DRIVE and STARE, demonstrate the effectiveness of the proposed approach.
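The "information theory"-based selection mentioned above can be illustrated generically by ranking candidate filter responses by their mutual information with the class label and keeping the top-ranked ones. This is only one possible instance of such a criterion and not the paper's specific procedure; the response matrix, the labels and the value of k below are assumed inputs.

```python
# Sketch: generic information-theoretic selection of a subset of candidate filters.
# X[i, j] is the response of candidate filter j on training sample i; y[i] is its label.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_filters(X: np.ndarray, y: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the indices of the k candidate filters most informative about y."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:k]

# toy usage with random responses standing in for real filter outputs
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = (X[:, 3] + X[:, 17] > 0).astype(int)    # here only two filters carry information
print(select_filters(X, y, k=5))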
Luc Brun; Gennaro Percannella; Alessia Saggese; Mario Vento. Action recognition by using kernels on aclets sequences. Computer Vision and Image Understanding, 144, pp. 3-13, 2016. ISSN: 1077-3142. DOI: 10.1016/j.cviu.2015.09.003. (Individual and Group Activities in Video Event Analysis). Keywords: Soft assignment.
Abstract: In this paper we propose a method for human action recognition based on a string kernel framework. An action is represented as a string, where each symbol composing it is associated with an aclet, that is, an atomic unit of the action encoding a feature vector extracted from raw data. In this way, measuring the similarity between actions reduces to designing a similarity measure between strings. We propose to define this string similarity using the global alignment kernel framework. In this context, the similarity between two aclets is computed by a novel soft evaluation method based on an enhanced Gaussian kernel. The main advantage of the proposed approach lies in its ability to effectively deal with actions of different lengths or different temporal scales, as well as with the noise introduced during the feature extraction step. The proposed method has been tested on three publicly available datasets, namely the MIVIA, the CAD and the MHAD, and the obtained results, compared with several state-of-the-art approaches, confirm the effectiveness and the applicability of our system in real environments, where inexperienced operators can easily configure it.
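The global alignment kernel framework this paper builds on admits a simple dynamic-programming formulation. The sketch below implements the standard recursion with a plain Gaussian local kernel between aclet feature vectors; it is a generic illustration, the paper's enhanced soft-assignment kernel is not reproduced, and the bandwidth and toy data are assumptions.

```python
# Sketch: global alignment kernel between two sequences of feature vectors ("aclets").
# Standard triangular recursion; the paper replaces the plain Gaussian local kernel
# with an enhanced soft-assignment variant, which is not reproduced here.
import numpy as np

def gaussian_k(a: np.ndarray, b: np.ndarray, sigma: float) -> float:
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def global_alignment_kernel(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """X: (n, d), Y: (m, d). Sums the products of local kernels over all alignments."""
    n, m = len(X), len(Y)
    M = np.zeros((n + 1, m + 1))
    M[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            k_ij = gaussian_k(X[i - 1], Y[j - 1], sigma)
            M[i, j] = k_ij * (M[i - 1, j] + M[i, j - 1] + M[i - 1, j - 1])
    return M[n, m]      # in practice computed in log-space for long sequences

# toy usage: two actions of different lengths compared directly
rng = np.random.default_rng(0)
a, b = rng.normal(size=(30, 8)), rng.normal(size=(45, 8))
print(global_alignment_kernel(a, b, sigma=5.0))
```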
Peter Hobson; Brian C. Lovell; Gennaro Percannella; Alessia Saggese; Mario Vento; Arnold Wiliem. Computer Aided Diagnosis for Anti-Nuclear Antibodies HEp-2 images: Progress and challenges. Pattern Recognition Letters, 82, Part 1, pp. 3-11, 2016. ISSN: 0167-8655. DOI: 10.1016/j.patrec.2016.06.013. (Pattern Recognition Techniques for Indirect Immunofluorescence Images Analysis). Keywords: Computer Aided Diagnoses.
Abstract: The Anti-Nuclear Antibodies (ANA) test using Human Epithelial (HEp-2) cells has been the gold standard to identify the presence of Connective Tissue Diseases (CTD) such as Systemic Lupus Erythematosus (SLE). As the ANA test is time consuming, labor intensive and subjective, there has been an ongoing effort to develop image-based Computer Aided Diagnosis (CAD) systems. This paper discusses the current progress and challenges in this field and highlights areas which require more attention.
Peter Hobson; Brian C. Lovell; Gennaro Percannella; Alessia Saggese; Mario Vento; Arnold Wiliem. HEp-2 staining pattern recognition at cell and specimen levels: Datasets, algorithms and results. Pattern Recognition Letters, 82, Part 1, pp. 12-22, 2016. ISSN: 0167-8655. DOI: 10.1016/j.patrec.2016.07.013. (Pattern Recognition Techniques for Indirect Immunofluorescence Images Analysis). Keywords: Computer Aided Diagnoses.
Abstract: The Indirect Immunofluorescence (IIF) protocol applied on Human Epithelial type 2 (HEp-2) cells is the current gold standard for the Antinuclear Antibody (ANA) test. The formulation of the diagnosis requires the visual analysis of a patient's specimen under a fluorescence microscope in order to recognize the cells' staining pattern, which could be related to a connective tissue disease. This analysis is time consuming and error prone; thus, in the recent past we have witnessed a growing interest in the pattern recognition scientific community directed at the development of methods for supporting this complex task. The main driver of the interest towards this problem is represented by the series of international benchmarking initiatives organized in the last four years, which allowed dozens of research groups to propose innovative methodologies for HEp-2 cells' staining pattern classification. In this paper we update the state of the art on HEp-2 cell and specimen classification by analyzing the performance achieved by the methods participating in the contest on Performance Evaluation of IIF Image Analysis Systems, hosted by the 22nd edition of the International Conference on Pattern Recognition (ICPR 2014), and in the Executable Thematic Special Issue of Pattern Recognition Letters on Pattern Recognition Techniques for IIF Images Analysis, and by highlighting the trends in the design of the best performing methods.
2015
Pasquale Foggia; Nicolai Petkov; Alessia Saggese; Nicola Strisciuglio; Mario Vento. Audio surveillance of roads: a system for detecting anomalous sounds. IEEE Transactions on Intelligent Transportation Systems, 17, 2015. DOI: 10.1109/TITS.2015.2470216. Keywords: Audio analysis and interpretation.
Nicola Strisciuglio; George Azzopardi; Mario Vento; Nicolai Petkov. Unsupervised delineation of the vessel tree in retinal fundus images. In: J. Tavares; R.M. Natal Jorge (eds.): Computational Vision and Medical Image Processing VIPIMAGE 2015, pp. 149-155, 2015. (Best Paper Award). Keywords: Medical image analysis.
Luc Brun; Gennaro Percannella; Alessia Saggese; Mario Vento. Action recognition by using kernels on aclets sequences. Computer Vision and Image Understanding, 2015. DOI: 10.1016/j.cviu.2015.09.003. Keywords: Video analysis and interpretation.
Benoit Gaüzère; Pierluigi Ritrovato; Alessia Saggese; Mario Vento. Human tracking using a top-down and knowledge based approach. In: International Conference on Image Analysis and Processing, 2015. Keywords: Video analysis and interpretation.
Duber Martinez; Alessia Saggese; Mario Vento; Humberto Loaiza; Eduardo Caicedo. Locally adapted gain control for reliable foreground detection. In: Proceedings of the International Conference on Computer Analysis of Images and Patterns, 2015. Keywords: Image analysis and recognition.
Pasquale Foggia; Benoit Gaüzère; Alessia Saggese; Mario Vento. Human action recognition using an improved string edit distance. In: 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2015), 2015. Keywords: Video analysis and interpretation.
Nicola Strisciuglio; George Azzopardi; Mario Vento; Nicolai Petkov. Multiscale Blood Vessel Delineation Using B-COSFIRE Filters. Book chapter in: Azzopardi, George; Petkov, Nicolai (eds.): Computer Analysis of Images and Patterns, vol. 9257, pp. 300-312, Springer International Publishing, 2015. ISBN: 978-3-319-23117-4. DOI: 10.1007/978-3-319-23117-4_26. Keywords: Image analysis and recognition, Medical image analysis.
Pasquale Foggia; Nicolai Petkov; Alessia Saggese; Nicola Strisciuglio; Mario Vento. Car crashes detection by audio analysis in crowded roads. In: 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2015), 2015. Keywords: Audio analysis and interpretation.
Vincenzo Carletti; Pasquale Foggia; Antonio Greco; Alessia Saggese; Mario Vento. Automatic detection of long-term parked cars. In: 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2015), 2015. Keywords: Video analysis and interpretation.
Peter Hobson; Brian C. Lovell; Gennaro Percannella; Mario Vento; Arnold Wiliem. Benchmarking human epithelial type 2 interphase cells classification methods on a very large dataset. Artificial Intelligence in Medicine, 2015. DOI: 10.1016/j.artmed.2015.08.001. Keywords: Medical image analysis.
Pasquale Foggia; Nicolai Petkov; Alessia Saggese; Nicola Strisciuglio; Mario Vento. Reliable Detection of Audio Events in Highly Noisy Environments. Pattern Recognition Letters, 2015. ISSN: 0167-8655. URL: http://www.sciencedirect.com/science/article/pii/S0167865515001981. Keywords: Audio analysis and interpretation, Classification Paradigms.
Vincenzo Carletti; Pasquale Foggia; Mario Vento. VF2 Plus: An Improved version of VF2 for Biological Graphs. In: Graph-Based Representations in Pattern Recognition, 2015. Keywords: Graph based classification and learning.
Vincenzo Carletti; Pasquale Foggia; Mario Vento; Xiaoyi Jiang. Report on the First Contest on Graph Matching Algorithms for Pattern Search in Biological Databases. In: Graph-Based Representations in Pattern Recognition, 2015. Keywords: Graph based classification and learning.
Vincenzo Carletti; Benoit Gaüzère; Luc Brun; Mario Vento. Approximate Graph Edit Distance Computation Combining Bipartite Matching and Exact Neighborhood Substructure Distance. In: Graph-Based Representations in Pattern Recognition, 2015. Keywords: Graph based classification and learning.
Vincenzo Carletti; Pasquale Foggia; Alessia Saggese; Mario Vento. A fast subgraph isomorphism algorithm for social networks graphs. In: Proceedings of the International Workshop on Social Network Analysis, 2015. Keywords: Graph based classification and learning.
Giovanni Acampora; Pasquale Foggia; Alessia Saggese; Mario Vento. A Hierarchical Neuro-Fuzzy Architecture for Human Behavior Analysis. Information Sciences, 2015. DOI: 10.1016/j.ins.2015.03.021. Keywords: Video analysis and interpretation.
Abstract: The analysis and detection of human behaviors from video sequences has recently become a very active research topic in computer vision and artificial intelligence. Indeed, human behavior understanding plays a fundamental role in several innovative application domains such as smart video surveillance, ambient intelligence and content-based video information retrieval. However, the uncertainty and vagueness that typically characterize human daily activities make frameworks for human behavior analysis (HBA) hard to design and develop. In order to bridge this gap, this paper proposes a hierarchical architecture, based on a tracking algorithm, time-delay neural networks and fuzzy inference systems, aimed at improving the performance of current HBA systems in terms of scalability, robustness and effectiveness in behavior detection. Precisely, the joint use of the aforementioned methodologies enables both a quantitative and a qualitative behavioral analysis that efficiently faces the intrinsic imprecision of people/object tracking and provides context-aware and semantic capabilities for better identifying a given activity. The validity and effectiveness of the proposed framework have been verified by using the well-known CAVIAR dataset and by comparing our system's performance with other similar approaches working on the same dataset.
Luc Brun; Pasquale Foggia; Alessia Saggese; Mario Vento. Recognition of human actions using edit distance on aclet strings. In: VISAPP 2015, 2015. Keywords: Video analysis and interpretation.
Pasquale Foggia; Antonio Greco; Alessia Saggese; Mario Vento. A method for detecting long term left baggage based on heat map. In: VISAPP 2015, 2015. Keywords: Video analysis and interpretation.
Antonio d'Acierno; Alessia Saggese; Mario Vento. Designing Huge Repositories of Moving Vehicles Trajectories for Efficient Extraction of Semantic Data. IEEE Transactions on Intelligent Transportation Systems, 2015. DOI: 10.1109/TITS.2015.2390652. Keywords: Video analysis and interpretation.
Mario Vento. A long trip in the charming world of graphs for Pattern Recognition. Pattern Recognition, 2015. ISSN: 0031-3203. URL: http://www.sciencedirect.com/science/article/pii/S0031320314000053. Keywords: Graph based classification and learning.
George Azzopardi; Nicola Strisciuglio; Mario Vento; Nicolai Petkov. Trainable COSFIRE filters for vessel delineation with application to retinal images. Medical Image Analysis, 19(1), pp. 46–57, 2015. ISSN: 1361-8415. URL: http://www.sciencedirect.com/science/article/pii/S1361841514001364. Keywords: Image analysis and recognition, Medical image analysis.
Abstract: Retinal imaging provides a non-invasive opportunity for the diagnosis of several medical pathologies. The automatic segmentation of the vessel tree is an important pre-processing step which facilitates subsequent automatic processes that contribute to such diagnosis. We introduce a novel method for the automatic segmentation of vessel trees in retinal fundus images. We propose a filter that selectively responds to vessels and that we call B-COSFIRE, with B standing for bar, which is an abstraction for a vessel. It is based on the existing COSFIRE (Combination Of Shifted Filter Responses) approach. A B-COSFIRE filter achieves orientation selectivity by computing the weighted geometric mean of the output of a pool of Difference-of-Gaussians filters, whose supports are aligned in a collinear manner. It achieves rotation invariance efficiently by simple shifting operations. The proposed filter is versatile as its selectivity is determined from any given vessel-like prototype pattern in an automatic configuration process. We configure two B-COSFIRE filters, namely symmetric and asymmetric, that are selective for bars and bar-endings, respectively. We achieve vessel segmentation by summing up the responses of the two rotation-invariant B-COSFIRE filters followed by thresholding. The results that we achieve on three publicly available data sets (DRIVE: Se = 0.7655, Sp = 0.9704; STARE: Se = 0.7716, Sp = 0.9701; CHASE_DB1: Se = 0.7585, Sp = 0.9587) are higher than many of the state-of-the-art methods. The proposed segmentation approach is also very efficient with a time complexity that is significantly lower than existing methods.
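A much-simplified sketch of the mechanism described in this abstract follows: Difference-of-Gaussians responses are blurred, shifted so that collinearly arranged support points contribute at the filter centre, and combined with a geometric mean, with rotation invariance obtained by taking the maximum over rotated configurations. All parameter values below (sigmas, the offsets rho, the number of orientations) are illustrative assumptions and not the published configuration, which is learned from a prototype pattern.

```python
# Much-simplified B-COSFIRE-style bar filter (illustrative; not the published configuration).
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(img, sigma=2.0):
    """On-center Difference-of-Gaussians response, half-wave rectified."""
    r = gaussian_filter(img, 0.5 * sigma) - gaussian_filter(img, sigma)
    return np.maximum(r, 0.0)

def b_cosfire_like(img, rhos=(0, 2, 4, 6), n_orientations=12, sigma0=1.0, alpha=0.3):
    dog = dog_response(img)
    best = np.zeros_like(img, dtype=float)
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        responses = []
        for rho in rhos:
            # blur more for points farther from the centre
            blurred = gaussian_filter(dog, sigma0 + alpha * rho)
            for sign in ((1, -1) if rho > 0 else (1,)):
                # shift the response so that it contributes at the filter centre
                dy = int(round(-sign * rho * np.sin(theta)))
                dx = int(round(-sign * rho * np.cos(theta)))
                responses.append(np.roll(blurred, (dy, dx), axis=(0, 1)))
        # geometric mean of the shifted responses (uniform weights here)
        stack = np.stack(responses) + 1e-12
        geo_mean = np.exp(np.mean(np.log(stack), axis=0))
        best = np.maximum(best, geo_mean)   # rotation invariance: max over orientations
    return best

# usage on a synthetic image containing a bright bar
img = np.zeros((64, 64)); img[30:34, 10:54] = 1.0
print(b_cosfire_like(img).max())
```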
Pasquale Foggia; Alessia Saggese; Mario Vento. Real-time Fire Detection for Video Surveillance Applications using a Combination of Experts based on Color, Shape and Motion. IEEE Transactions on Circuits and Systems for Video Technology, 2015. DOI: 10.1109/TCSVT.2015.2392531. Keywords: Video analysis and interpretation.
2014 |
Rosario Di Lascio; Antonio Greco; Alessia Saggese; Mario Vento Improving fire detection reliability by a combination of videoanalytics Incollection Springer International Publishing (Ed.): Image Analysis and Recognition, pp. 477-484, 2014, ISBN: 978-3-319-11757-7. BibTeX | Tag: Video analysis and interpretation | Links: @incollection{iciar2014, title = {Improving fire detection reliability by a combination of videoanalytics}, author = {Rosario Di Lascio and Antonio Greco and Alessia Saggese and Mario Vento}, editor = {Springer International Publishing}, url = {http://dx.doi.org/10.1007/978-3-319-11758-4_52}, isbn = {978-3-319-11757-7}, year = {2014}, date = {2014-10-15}, booktitle = {Image Analysis and Recognition}, pages = {477-484}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {incollection} } |
Pasquale Foggia; Alessia Saggese; Nicola Strisciuglio; Mario Vento Cascade Classifiers Trained on Gammatonegrams for Reliably Detecting Audio Events Inproceedings IEEE (Ed.): IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014), 2014, ISBN: 978-1-4799-4871-0/14. BibTeX | Tag: Audio analysis and interpretation @inproceedings{avss14_audio, title = {Cascade Classifiers Trained on Gammatonegrams for Reliably Detecting Audio Events}, author = {Pasquale Foggia and Alessia Saggese and Nicola Strisciuglio and Mario Vento}, editor = {IEEE}, isbn = {978-1-4799-4871-0/14}, year = {2014}, date = {2014-08-29}, booktitle = {IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014)}, keywords = {Audio analysis and interpretation}, pubstate = {published}, tppubtype = {inproceedings} } |
Pasquale Foggia; Alessia Saggese; Nicola Strisciuglio; Mario Vento Exploiting the Deep Learning Paradigm for Recognizing Human Actions Inproceedings IEEE (Ed.): IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014), 2014, ISBN: 978-1-4799-4871-0/14. BibTeX | Tag: Video analysis and interpretation @inproceedings{avss14_deep, title = {Exploiting the Deep Learning Paradigm for Recognizing Human Actions}, author = {Pasquale Foggia and Alessia Saggese and Nicola Strisciuglio and Mario Vento}, editor = {IEEE}, isbn = {978-1-4799-4871-0/14}, year = {2014}, date = {2014-08-29}, booktitle = {IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014)}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {inproceedings} } |
Luc Brun; Gennaro Percannella; Alessia Saggese; Mario Vento HacK: A System for the Recognition of Human Actions by Kernels of Visual Strings Inproceedings IEEE (Ed.): IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014), 2014, ISBN: 978-1-4799-4871-0/14. BibTeX | Tag: Video analysis and interpretation @inproceedings{avss14_string, title = {HacK: A System for the Recognition of Human Actions by Kernels of Visual Strings}, author = {Luc Brun and Gennaro Percannella and Alessia Saggese and Mario Vento}, editor = {IEEE}, isbn = {978-1-4799-4871-0/14}, year = {2014}, date = {2014-08-29}, booktitle = {IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014)}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {inproceedings} } |
Luc Brun; Benito Cappellania; Alessia Saggese; Mario Vento Detection of Anomalous Driving Behaviors by Unsupervised Learning of Graphs Inproceedings IEEE (Ed.): IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014), 2014, ISBN: 978-1-4799-4871-0/14. BibTeX | Tag: Video analysis and interpretation @inproceedings{avss14_vrs1, title = {Detection of Anomalous Driving Behaviors by Unsupervised Learning of Graphs}, author = {Luc Brun and Benito Cappellania and Alessia Saggese and Mario Vento}, editor = {IEEE}, isbn = {978-1-4799-4871-0/14}, year = {2014}, date = {2014-08-29}, booktitle = {IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014)}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {inproceedings} } |
Luc Brun; Alessia Saggese; Mario Vento A Reliable String Kernel based Approach for Solving Queries by Sketch Inproceedings IEEE (Ed.): IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014), 2014, ISBN: 978-1-4799-4871-0/14. BibTeX | Tag: Video analysis and interpretation @inproceedings{avss14_vrs2, title = {A Reliable String Kernel based Approach for Solving Queries by Sketch}, author = {Luc Brun and Alessia Saggese and Mario Vento}, editor = {IEEE}, isbn = {978-1-4799-4871-0/14}, year = {2014}, date = {2014-08-29}, booktitle = {IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2014)}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {inproceedings} } |
Peter Hobson; Brian C. Lovell; Gennaro Percannella; Mario Vento; Arnold Wiliem Classifying anti-nuclear antibodies HEp-2 images: A benchmarking platform Conference 22nd International Conference on Pattern Recognition, ICPR 2014, 2014, ISSN: 1051-4651, (cited By 1). BibTeX | Tag: | Links: @conference{Hobson20143233, title = {Classifying anti-nuclear antibodies HEp-2 images: A benchmarking platform}, author = {Peter Hobson and Brian C. Lovell and Gennaro Percannella and Mario Vento and Arnold Wiliem}, url = {http://www.scopus.com/inward/record.url?eid=2-s2.0-84919934897&partnerID=40&md5=6a6ca4c35da7b6c200ee79d7c6c6f842}, issn = {1051-4651}, year = {2014}, date = {2014-08-24}, booktitle = {22nd International Conference on Pattern Recognition, ICPR 2014}, journal = {Proceedings - International Conference on Pattern Recognition}, pages = {3233-3238}, note = {cited By 1}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
Alessia Saggese; Luc Brun; Mario Vento Detecting and indexing moving objects for Behavior Analysis by Video and Audio Interpretation Journal Article Electronic Letters on Computer Vision and Image Analysis, 13 (2), 2014, ISSN: 1577-5097. BibTeX | Tag: Audio analysis and interpretation, Video analysis and interpretation | Links: @article{elcvia_14, title = {Detecting and indexing moving objects for Behavior Analysis by Video and Audio Interpretation}, author = {Alessia Saggese and Luc Brun and Mario Vento}, url = {http://elcvia.cvc.uab.es/article/view/603}, issn = {1577-5097}, year = {2014}, date = {2014-06-07}, journal = {Electronic Letters on Computer Vision and Image Analysis}, volume = {13}, number = {2}, keywords = {Audio analysis and interpretation, Video analysis and interpretation}, pubstate = {published}, tppubtype = {article} } |
Alessia Saggese Behavior Analysis Book LAP LAMBERT Academic Publishing, 2014, ISBN: 978-3659529634. BibTeX | Tag: Audio analysis and interpretation, Image analysis and recognition, Video analysis and interpretation | Links: @book{saggese2014, title = {Behavior Analysis}, author = {Alessia Saggese}, editor = {LAP LAMBERT Academic Publishing}, url = {http://www.amazon.it/Behavior-Analysis-Detecting-indexing-Interpretation/dp/365952963X/ref=sr_1_1?ie=UTF8&qid=1398521162&sr=8-1&keywords=saggese+alessia}, isbn = {978-3659529634}, year = {2014}, date = {2014-04-10}, publisher = {LAP LAMBERT Academic Publishing}, keywords = {Audio analysis and interpretation, Image analysis and recognition, Video analysis and interpretation}, pubstate = {published}, tppubtype = {book} } |
Giulio Iannello; Gennaro Percannella; Paolo Soda; Mario Vento Mitotic cells recognition in HEp-2 images Journal Article Pattern Recognition Letters, 2014. BibTeX | Tag: Image analysis and recognition @article{hep2_2014_prl, title = {Mitotic cells recognition in HEp-2 images}, author = {Giulio Iannello and Gennaro Percannella and Paolo Soda and Mario Vento}, year = {2014}, date = {2014-03-12}, journal = {Pattern Recognition Letters}, keywords = {Image analysis and recognition}, pubstate = {published}, tppubtype = {article} } |
Pasquale Foggia; Gennaro Percannella; Alessia Saggese; Mario Vento Pattern recognition in stained HEp-2 cells: Where are we now? Journal Article Pattern Recognition, 2014, ISSN: 0031-3203. BibTeX | Tag: Image analysis and recognition | Links: @article{hep2_2014_pr, title = {Pattern recognition in stained HEp-2 cells: Where are we now?}, author = {Pasquale Foggia and Gennaro Percannella and Alessia Saggese and Mario Vento}, url = {http://www.sciencedirect.com/science/article/pii/S0031320314000284}, issn = {0031-3203}, year = {2014}, date = {2014-02-12}, journal = {Pattern Recognition}, keywords = {Image analysis and recognition}, pubstate = {published}, tppubtype = {article} } |
Pasquale Foggia; Gennaro Percannella; Paolo Soda; Mario Vento Special issue on the analysis and recognition of indirect immuno-fluorescence images Journal Article Pattern Recognition, pp. 2303-2304, 2014. BibTeX | Tag: Medical image analysis @article{si_pr_2015, title = {Special issue on the analysis and recognition of indirect immuno-fluorescence images}, author = {Pasquale Foggia and Gennaro Percannella and Paolo Soda and Mario Vento}, year = {2014}, date = {2014-01-15}, journal = {Pattern Recognition}, pages = {2303-2304}, keywords = {Medical image analysis}, pubstate = {published}, tppubtype = {article} } |
Luc Brun; Alessia Saggese; Mario Vento Dynamic Scene Understanding for behavior analysis based on string kernels Journal Article Circuits and Systems for Video Technology, IEEE Transactions on, 24 (10), pp. 1669 - 1681, 2014, ISSN: 1051-8215. Abstract | BibTeX | Tag: Video analysis and interpretation | Links: @article{6727519, title = {Dynamic Scene Understanding for behavior analysis based on string kernels}, author = {Luc Brun and Alessia Saggese and Mario Vento}, url = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6727519&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A6913590%29}, issn = {1051-8215}, year = {2014}, date = {2014-01-01}, journal = {Circuits and Systems for Video Technology, IEEE Transactions on}, volume = {24}, number = {10}, pages = {1669 - 1681}, abstract = {This paper aims at dynamically understanding the properties of a scene from the analysis of moving object trajectories. Two different applications are proposed: the former is devoted to identifying abnormal behaviors, while the latter allows extracting the k trajectories most similar to the one hand-drawn by a human operator. A set of normal trajectories’ models is extracted using a novel unsupervised learning technique: the scene is adaptively partitioned into zones using the distribution of the training set and each trajectory is represented as a sequence of symbols by considering positional information (the zones crossed in the scene), speed, and shape. The main novelty is the use of a kernel-based approach for evaluating the similarity between the trajectories. Furthermore, we define a novel and efficient kernel-based clustering algorithm, aimed at obtaining groups of normal trajectories. Experiments, conducted over three standard data sets, confirm the effectiveness of the proposed approach.}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {article} } This paper aims at dynamically understanding the properties of a scene from the analysis of moving object trajectories. Two different applications are proposed: the former is devoted to identifying abnormal behaviors, while the latter allows extracting the k trajectories most similar to the one hand-drawn by a human operator. A set of normal trajectories’ models is extracted using a novel unsupervised learning technique: the scene is adaptively partitioned into zones using the distribution of the training set and each trajectory is represented as a sequence of symbols by considering positional information (the zones crossed in the scene), speed, and shape. The main novelty is the use of a kernel-based approach for evaluating the similarity between the trajectories. Furthermore, we define a novel and efficient kernel-based clustering algorithm, aimed at obtaining groups of normal trajectories. Experiments, conducted over three standard data sets, confirm the effectiveness of the proposed approach. |
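The abstract above represents each trajectory as a string of symbols (the zones crossed in the scene, plus speed and shape) and compares trajectories with a string kernel. The sketch below only illustrates that representation under simplifying assumptions: it uses a fixed grid instead of the paper's adaptive partition, encodes position only, and compares strings with a plain n-gram spectrum kernel rather than the kernel defined in the paper; `trajectory_to_string` and `spectrum_kernel` are illustrative names, not the authors' code.

```python
# Illustrative sketch of the trajectory-to-string idea: each trajectory becomes a
# sequence of zone symbols, and two trajectories are compared with a simple n-gram
# (spectrum) string kernel. Fixed grid and position-only symbols are used here for
# brevity; the paper partitions the scene adaptively and also encodes speed and shape.
from collections import Counter
from math import sqrt

def trajectory_to_string(points, frame_size=(640, 480), grid=(4, 4)):
    """Map a list of (x, y) points to a string of zone symbols on a fixed grid."""
    w, h = frame_size
    cols, rows = grid
    symbols = []
    for x, y in points:
        col = min(int(x / w * cols), cols - 1)
        row = min(int(y / h * rows), rows - 1)
        zone = row * cols + col
        if not symbols or symbols[-1] != zone:   # record only zone transitions
            symbols.append(zone)
    return ''.join(chr(ord('A') + z) for z in symbols)

def spectrum_kernel(s, t, n=2):
    """Normalised n-gram spectrum kernel between two symbol strings."""
    def ngrams(u):
        return Counter(u[i:i + n] for i in range(len(u) - n + 1))
    a, b = ngrams(s), ngrams(t)
    dot = sum(count * b[g] for g, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Similar trajectories yield a kernel value close to 1; a trajectory with low kernel
# similarity to every learnt group of normal trajectories would be flagged as anomalous.
path_a = [(10, 10), (100, 60), (300, 200), (600, 430)]
path_b = [(20, 20), (110, 70), (310, 210), (620, 440)]
path_c = [(600, 20), (320, 210), (20, 430)]
print(spectrum_kernel(trajectory_to_string(path_a), trajectory_to_string(path_b)))  # 1.0
print(spectrum_kernel(trajectory_to_string(path_a), trajectory_to_string(path_c)))  # 0.0
```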
Vincenzo Carletti; Luca Del Pizzo; Gennaro Percannella; Mario Vento Foreground detection optimization for SoCs embedded on Smart Cameras Conference Proceedings of the 8th ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2014, 2014, ISBN: 978-145032925-5, (cited By 0). BibTeX | Tag: | Links: @conference{Carletti2014, title = {Foreground detection optimization for SoCs embedded on Smart Cameras}, author = {Vincenzo Carletti and Luca Del Pizzo and Gennaro Percannella and Mario Vento}, url = {http://www.scopus.com/inward/record.url?eid=2-s2.0-84913586650&partnerID=40&md5=b5d3604c63763f29408dd27b9893d6b9}, isbn = {978-145032925-5}, year = {2014}, date = {2014-01-01}, booktitle = {Proceedings of the 8th ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2014}, journal = {Proceedings of the 8th ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2014}, note = {cited By 0}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
2013 |
Pasquale Foggia; Gennaro Percannella; Mario Vento Graph matching and learning in Pattern Recognition in the last 10 years Journal Article International Journal of Pattern Recognition and Artificial Intelligence, 2013, ISSN: 1793-6381. BibTeX | Tag: Graph based classification and learning | Links: @article{foggia13_ijprai, title = {Graph matching and learning in Pattern Recognition in the last 10 years}, author = {Pasquale Foggia and Gennaro Percannella and Mario Vento}, url = {http://www.worldscientific.com/doi/abs/10.1142/S0218001414500013}, issn = {1793-6381}, year = {2013}, date = {2013-12-09}, journal = {International Journal of Pattern Recognition and Artificial Intelligence}, keywords = {Graph based classification and learning}, pubstate = {published}, tppubtype = {article} } |
Pasquale Foggia; Gennaro Percannella; Alessia Saggese; Mario Vento Recognizing Human Actions by a bag of visual words Inproceedings IEEE International Conference on Systems, Man and Cybernetics, IEEE SMC 2013, 2013. BibTeX | Tag: Video analysis and interpretation @inproceedings{smc-13, title = {Recognizing Human Actions by a bag of visual words}, author = {Pasquale Foggia and Gennaro Percannella and Alessia Saggese and Mario Vento}, year = {2013}, date = {2013-10-15}, booktitle = {IEEE International Conference on Systems, Man and Cybernetics, IEEE SMC 2013}, keywords = {Video analysis and interpretation}, pubstate = {published}, tppubtype = {inproceedings} } |