1. A General-Purpose Framework for Genetic Improvement
    F. Marino, G. Squillero, A. Tonda
    Parallel Problem Solving from Nature - PPSN XIV
    DOI: 10.1007/978-3-319-45823-6_32
    KEYWORDS: genetic improvement, genetic programming, linear genetic programming, software engineering
    ABSTRACT: Genetic Improvement is an evolutionary-based technique. Despite its relatively recent introduction, several successful applications have already been reported in the scientific literature: it has been demonstrated able to modify the code of complex programs without altering their intended behavior, and to increase performance with regard to speed, energy consumption, or memory use. Some results suggest that it could also be used to correct bugs, restoring the software's intended functionalities. Given the novelty of the technique, however, instances of Genetic Improvement so far rely upon ad-hoc, language-specific implementations. In this paper, we propose a general framework based on the software engineering idea of mutation testing coupled with Genetic Programming, which can be easily adapted to different programming languages and objectives. In a preliminary evaluation, the framework efficiently optimizes the code of the md5 hash function in C, Java, and Python.

  2. A New Simulation-Based Fault Injection Approach for the Evaluation of Transient Errors in GPGPUs
    S. Azimi, B. Du, L. Sterpone
    Lecture Notes in Computer Science
    DOI: 10.1007/978-3-319-30695-7_29
    KEYWORDS: gpgpu, radiation effects, seu, set, reliability
    ABSTRACT: General Purpose Graphics Processing Units (GPGPUs) are increasingly adopted thanks to their high computational capabilities. GPGPUs are preferable to CPUs for a large range of computationally intensive applications, not necessarily related to computer graphics. Within the high performance computing context, GPGPUs require a large amount of resources and have plenty of execution units. GPGPUs are becoming attractive for safety-critical applications, where the phenomenon of transient errors is a major concern. In this paper we propose a novel fault injection simulation methodology for the accurate simulation of GPGPU applications during the occurrence of transient errors. The developed environment allows injecting transient errors within all the memory areas of GPGPUs and into non-user-accessible resources, such as the combinational logic and sequential elements of streaming processors. The capability of the fault injection simulation platform has been evaluated by testing three benchmark applications including mitigation approaches. The measured computational cost and time are minimal, thus enabling the usage of the developed approach for effective transient error evaluation.

  3. A Selective Mapper for the Mitigation of SETs on Rad-Hard RTG4 Flash-based FPGAs
    S. Azimi, B. Du, L. Sterpone
    IEEE RADECS 2016
    KEYWORDS: fpga, mitigation, rtg4

  4. A Suite of IEEE 1687 Benchmark Networks
    A. Tšertov, A. Jutman, S. Devadze, M. Sonza Reorda, E. Larsson, F. Ghani Zadegan, R. Cantoro, M. Montazeri, R. Krenz-Baath
    ABSTRACT: The maturation of the IJTAG concept and its approval as the IEEE 1687 standard in 2014 has generated a wave of research activities and created demand for a set of appropriate and challenging benchmarks. This paper presents such a set, developed by an industrial and academic consortium and constructed in a way that facilitates objective comparison of experimental results across research groups, as well as represents challenging network examples exhaustively utilizing features and constructs defined by the standard. The suite is arranged in four comprehensive categories, each having its particular purpose and composition principles, as described in the paper. We have also analyzed the limitations of previous popular and ad-hoc benchmark sets, as these limitations largely motivated our current action. The new public-domain benchmarks are distributed together with source files and documentation through a dedicated web site. Some of the previous research results on IEEE 1687 have been reapplied on the new benchmark set, thus creating an important initial reference point for the research community.

  5. A low-cost susceptibility analysis methodology to selectively harden logic circuits
    I. Wali, B. Deveautour, A. Virazel, A. Bosio, P. Girard, M. Reorda
    2016 21st IEEE European Test Symposium (ETS)
    DOI: 10.1109/ETS.2016.7519296

  6. A neural network model based on co-occurrence matrix for fall prediction
    H. Masoud, R. Ferrero, B. Montrucchio, M. Rebaudengo
    KEYWORDS: fall prediction, fall avoidance system, fall prevention, health care system
    ABSTRACT: Fall avoidance systems reduce injuries due to unintentional falls, but most of them are fall detection systems that activate an alarm after the fall occurrence. Since predicting a fall is the most promising approach to avoid a fall injury, this study proposes a method based on new features and a multilayer perceptron that outperforms state-of-the-art approaches. Since the accelerometer and gyroscope embedded in a smartphone are recognized to be precise enough to be used in fall avoidance systems, they have been exploited in an experimental analysis in order to compare the proposal with state-of-the-art approaches. The results have shown that the proposed approach improves the accuracy from 83% to 90%.

    E. Arco, P. Boccardo, F. Gandino, A. Lingua, F. Noardo, M. Rebaudengo
    DOI: 10.5194/isprs-annals-IV-4-W1-67-2016
    KEYWORDS: pollution, semantic web, environmental monitoring, dynamic sensors, standard data models, internet of things
    ABSTRACT: Air quality is a factor of primary importance for the quality of life. An increase in the percentage of pollutants in the air can cause serious problems to human and environmental health. For this reason it is essential to monitor their values, in order to prevent the consequences of an excessive concentration, to reduce the pollution production, or to avoid contact with major pollutant concentrations through the available tools. Some recently developed tools for monitoring and sharing the data in an effective system make it possible to manage the information in a smart way, so as to improve the knowledge of the problem and, consequently, to take preventive measures in favour of urban air quality and human health. In this paper, the authors describe an innovative solution that combines geomatic sensors (GNSS) and pollutant measurement sensors into a low-cost device for the acquisition of dynamic pollutant data using a mobile platform based on bicycles. The acquired data can be analysed to evaluate the local distribution of pollutant density and shared through web platforms that use standard protocols for an effective smart use.

  8. Accurate Analysis of SET effects on Flash-based FPGA System-on-a-Chip for Satellite Application
    G. Raoul, M. David, F. Luca
    KEYWORDS: fpgas, radiation effects, set, seu, see

  9. Accurate Analysis of SET effects on Flash-based FPGA System-on-a-Chip for Satellite Applications
    S. Azimi, B. Du, L. Sterpone
    KEYWORDS: fpga, radiation, set, see, seu, fault tolerance

  10. Accurate Analysis of SET effects on Flash-based FPGA System-on-a-Chip for Satellite Applications
    S. Azimi, B. Du, L. Sterpone, R. Grimoldi, L. Fossati, D. Codinachs
    IEEE DDECS 2016
    KEYWORDS: fpgas, radiation effects, set, seu, see

  11. An FPGA-based testing platform for the validation of automotive powertrain ECU
    B. Du, L. Sterpone
    2016 IFIP/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2016
    DOI: 10.1109/VLSI-SoC.2016.7753553
    KEYWORDS: fpga; gtm; hardware and architecture; electrical and electronic engineering; automotive; ecu validation; etpu

  12. An effective approach for functional test programs compaction
    A. Touati, A. Bosio, P. Girard, A. Virazel, P. Bernardi, M. Reorda
    Formal Proceedings of the 2016 IEEE 19th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2016
    DOI: 10.1109/DDECS.2016.7482466
    KEYWORDS: safety, risk, reliability and quality; electrical and electronic engineering; hardware and architecture; test compaction; sbst; microprocessor test; functional test

  13. Analysis and optimization of Synchronization Algorithms for Multicore Architectures
    M. Hemmatpour, R. Ferrero, B. Montrucchio, M. Rebaudengo
    KEYWORDS: multicore design, synchronization techniques, parallel programming

  14. Analysis of radiation-induced SEUs on dynamic reconfigurable systems
    L. Boragno, L. Sterpone, D.M. Codinachs
    DOI: 10.1109/ReCoSoC.2016.7533907
    KEYWORDS: fpga, reconfigurable, seu, dynamic reconfiguration, tmr, fault injection
    ABSTRACT: SRAM-based FPGAs are widely employed in space and avionics computing. The unfriendly environment and the radiation sensitivity of FPGAs can have dramatic drawbacks on application reliability. The partial self-reconfiguration ability gives an excellent aid to counteract single event upsets (SEUs) caused by excessive silicon ionization, and the consequent system misbehavior. Related to this feature, fault injection, fault emulation, and configuration scrubbing have been carried out over three versions of a reconfigurable Fast Fourier Transform (FFT) system: a single FFT, a single larger FFT, and an FFT with a TMR architecture. The analysis has been focused on a multiple injected SEUs scenario, considering the availability problem in a real-time application and highlighting the circuit tolerance in the presence of upsets. This operation aims to emulate as much as possible a real radiation test, avoiding the drawbacks that such a procedure involves. The obtained results have shown the advantages of the configuration scrubbing performed with the aim of fixing multiple upsets, achieving up to 13.6% of circuit hardening. The achieved conclusions are an interesting starting point for the study of fault mitigation techniques through the use of reconfiguration. The projects have been tested on a Z-7010 AP SoC.

  15. Analysis of the effects of soft errors on compression algorithms through fault injection inside program variables
    S. Avramenko, M. Reorda, M. Violante, G. Fey
    LATS 2016 - 17th IEEE Latin-American Test Symposium
    DOI: 10.1109/LATW.2016.7483332
    KEYWORDS: electrical and electronic engineering; safety, risk, reliability and quality; hardware and architecture; soft errors; reliability; highlevel fault injection; compression algorithm
    ABSTRACT: Data logging applications, such as those deployed in satellite launchers to acquire telemetry data, may require compression algorithms to cope with large amounts of data as well as limited storage and communication capabilities. When commercial-off-the-shelf hardware components are used to implement such applications, radiation-induced soft errors may occur, especially during the last stages of the launcher cruise, potentially affecting the algorithm execution. The purpose of this work is to analyze two compression algorithms using fault injection to evaluate their robustness against soft errors. The main contribution of the work is the analysis of the compression algorithm susceptibility by attacking their data structures (also referred to as program variables) rather than the memory elements of the computing platform in charge of the algorithm execution. This approach is agnostic of the downstream implementation details. Instead, the intrinsic robustness of compression algorithms can be evaluated quickly, and system-level decisions can be taken before the computing platform is finalized.
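The variable-level fault-injection idea can be sketched as follows; the run-length encoder, the choice of injected variable, and the outcome classes below are hypothetical illustrations, not the algorithms or tools used in the paper.

```python
def rle_compress(data):
    """Toy run-length encoder: list of [value, count] pairs."""
    out = []
    for b in data:
        if out and out[-1][0] == b:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

def rle_decompress(pairs):
    """Expand the [value, count] pairs back into the original list."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

def inject_and_classify(data, pair_idx, bit):
    """Flip one bit in a 'program variable' (here, a run count) and
    classify the outcome of the perturbed execution."""
    pairs = rle_compress(data)
    pairs[pair_idx][1] ^= (1 << bit)       # the injected soft error
    try:
        result = rle_decompress(pairs)
    except Exception:
        return "crash"
    return "silent" if result == data else "SDC"  # silent data corruption
```

Running such an injection over every variable and bit position yields the susceptibility statistics the abstract refers to, without modeling the underlying hardware.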

  16. Automatic generation of stimuli for fault diagnosis in IEEE 1687 networks
    R. Cantoro, M. Montazeri, M. Sonza Reorda, F. Zadegan, E. Larsson
    DOI: 10.1109/IOLTS.2016.7604692
    ABSTRACT: The IEEE 1687 standard describes reconfigurable structures allowing flexible access to the instruments existing within devices (e.g., to support test, debug, calibration, etc.) by the use of configurable modules acting as controllable switches. The increasing adoption of this standard requires the availability of algorithms and tools to automate its usage. Since the resulting networks could inevitably be affected by defects which may prevent their correct usage, solutions allowing not only to test against these defects, but also to diagnose them (i.e., to identify the location of possible faults) are of utmost importance. This paper proposes a method to automatically generate suitable test stimuli: by applying them and observing the output of the network, one can not only detect possible faults, but also identify the fault responsible for the misbehavior. Experimental results gathered on a set of benchmark networks with a prototypical tool implementing the proposed techniques show the feasibility and provide a first idea about the length of the required input stimuli.
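As an illustration of the apply-stimuli-and-diagnose idea, the sketch below simulates a plain scan chain with an optional stuck-at fault and enumerates the candidate faults consistent with an observed response; the flat chain model and all names are hypothetical simplifications, not the paper's reconfigurable network model.

```python
def shift(chain_len, pattern, stuck=None):
    """Shift `pattern` through a scan chain of `chain_len` cells.
    `stuck` = (cell_index, value) models a stuck-at fault on one cell.
    Returns the bit stream observed at the chain output."""
    cells = [0] * chain_len
    observed = []
    for bit in pattern:
        observed.append(cells[-1])       # last cell drives the output
        cells = [bit] + cells[:-1]       # shift one position
        if stuck is not None:
            i, v = stuck
            cells[i] = v                 # the fault overrides the cell
    return observed

def diagnose(chain_len, pattern, observed):
    """Return the set of stuck-at faults whose simulated response
    matches the observed output (candidate fault locations)."""
    candidates = set()
    for i in range(chain_len):
        for v in (0, 1):
            if shift(chain_len, pattern, (i, v)) == observed:
                candidates.add((i, v))
    return candidates
```

A stimulus is diagnostic when different faults produce different responses; longer or better-chosen patterns shrink the candidate set, which is the quality the paper's generator optimizes for in far richer networks.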

  17. Challenging Anti-virus Through Evolutionary Malware Obfuscation
    M. Gaudesi, A. Marcelli, E. Sanchez, G. Squillero, A. Tonda
    KEYWORDS: security, malware, packer, computational intelligence, evolutionary algorithms
    ABSTRACT: The use of anti-virus software has become something of an act of faith. A recent study showed that more than 80% of all personal computers have anti-virus software installed. However, the protection mechanisms in place are far less effective than users would expect. Malware analysis is a classical example of a cat-and-mouse game: as new anti-virus techniques are developed, malware authors respond with new ones to thwart analysis. Every day, anti-virus companies analyze thousands of malware samples that have been collected through honeypots, hence their research is restricted to already existing viruses. This article describes a novel method for malware obfuscation based on an evolutionary opcode generator and a special ad-hoc packer. The results can be used by the security industry to test the ability of their systems to react to malware mutations.

  18. Development of an automated test system for ECU software validation: an industrial experience
    E. Bagalini, M. Violante
    DOI: 10.1109/BEC.2016.7743739
    KEYWORDS: verification; validation; model based software design; hardware-in-the-loop (hil); fault-injection; rapid prototyping; engine test bench; ecu
    ABSTRACT: Hardware-in-the-loop (HIL) and fault injection testing are widely used in the automotive industry to validate hardware and software architectures, both as best practice and in fulfillment of international functional safety standards. Time and economic investments can constitute an obstacle to the development of effective testing systems, especially for small and medium automotive industries. This paper presents a solution we developed to balance cost with model capabilities and simulation efficiency. The adoption of a model-based approach and the use of a mix of real and emulated sensors and actuators made it possible to meet cost and temporal constraints. The presented solution has been developed in six months and is currently adopted to validate new engine control strategies, reducing the testing effort by up to 90% compared to manual tests.

  19. Effective generation and evaluation of diagnostic SBST programs
    A. Riefert, R. Cantoro, M. Sauer, M. Reorda, B. Becker
    Proceedings of the IEEE VLSI Test Symposium
    DOI: 10.1109/VTS.2016.7477279
    KEYWORDS: automatic test pattern generation, circuit faults, fault detection, fault diagnosis, interpolation, model checking, program processors
    ABSTRACT: Functional test and software-based self-test (SBST) approaches for processors are becoming popular as they enable low-cost production tests and are often the only solution for in-field tests. With the increasing use of volume diagnosis, efficient and cost-effective diagnosis methods are required. A high-quality functional or SBST test program can be used to perform logic fault diagnosis with low-cost test equipment and therefore significantly reduce the cost of diagnosis. We present a framework for the automatic generation of functional diagnostic sequences for stuck-at faults. The framework allows a user to specify constraints imposed by the employed test environment and generates diagnostic sequences satisfying these constraints. Furthermore, the framework is able to prove the equivalence of faults under the specified constraints. This makes it possible to compute the best diagnostic quality that can be reached under the given environmental constraints. It also gives the necessary information for implementing selective DFT techniques in order to differentiate faults which cannot be distinguished otherwise. In our experiments we evaluated a MIPS-like processor. The results show that our approach can effectively distinguish fault pairs or prove their equivalence under different environmental constraints. To the best of our knowledge, this is the first approach which enables the automatic generation of diagnostic SBST programs and allows to effectively prove the equivalence of faults in functional and SBST test environments.

  20. Eigenwalk: a Novel Feature for Walk Classification and Fall Prediction
    M. Hemmatpour, R. Ferrero, B. Montrucchio, M. Rebaudengo
    KEYWORDS: fall, elderly, health care
    ABSTRACT: Predicting a fall is one of the most promising approaches to avoid it. Different studies strive to classify abnormal and normal walks in order to predict a fall before its occurrence. This study introduces eigenwalk, a novel feature based on the principal components of the accelerometer and gyroscope signals. This feature, in conjunction with a random forest classifier, is able to distinguish walk patterns and to estimate a fall risk. As the accelerometer and the gyroscope embedded in a smartphone are recognized to be precise enough for fall avoidance systems, they have been exploited in an experimental analysis in order to compare the proposed approach with the most recent ones. The results have shown that the new feature in combination with the random forest classification outperforms state-of-the-art approaches, improving the accuracy up to 98.6%.
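The principal-component idea behind such a feature can be illustrated with a minimal sketch (this is not the authors' exact feature definition): the leading eigenvalue of the covariance of a windowed sensor signal grows with the spread of the walk dynamics, so it separates steady from erratic motion.

```python
import numpy as np

def leading_eigen_feature(window):
    """Leading eigenvalue and eigenvector of the covariance of a
    sensor window (rows = samples, columns = sensor axes, e.g. the
    three accelerometer axes). The dominant principal component
    summarizes the walk dynamics in the window."""
    centered = window - window.mean(axis=0)
    cov = centered.T @ centered / (len(window) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvals[-1], eigvecs[:, -1]
```

Feeding this scalar (or the full eigenvector) per window into any classifier, e.g. a random forest, gives a walk-pattern discriminator in the spirit of the abstract.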

  21. Evolutionary deckbuilding in hearthstone
    P. Garcia-Sanchez, A. Tonda, G. Squillero, A. Mora, J. Merelo
    Proceedings of Computational Intelligence and Games (CIG), 2016
    DOI: 10.1109/CIG.2016.7860426
    KEYWORDS: crystals; standards; buildings; games; evolutionary computation; artificial intelligence; electronic mail
    ABSTRACT: One of the most notable features of collectible card games is deckbuilding, that is, defining a personalized deck before the real game. Deckbuilding is a challenge that involves a big and rugged search space, with different and unpredictable behaviour after simple card changes and even hidden information. In this paper, we explore the possibility of automated deckbuilding: a genetic algorithm is applied to the task, with the evaluation delegated to a game simulator that tests every potential deck against a varied and representative range of human-made decks. In these preliminary experiments, the approach has proven able to create quite effective decks, a promising result that proves that, even in this challenging environment, evolutionary algorithms can find good solutions.

  22. Exploiting accelerometers to estimate displacement
    R. Ferrero, F. Gandino, M. Hemmatpour, B. Montrucchio, M. Rebaudengo
    KEYWORDS: accelerometer, kalman filter, position tracking, displacement
    ABSTRACT: Although the acceleration is physically related to the displacement of an object, i.e., to its change of position, it is demonstrated that the double integration of the acceleration does not provide accurate information about the displacement, due to noise and measurement errors. This paper evaluates a correction technique based on the Kalman filter in order to increase the accuracy of the estimation of the displacement. Experiments were performed by acquiring the acceleration with an off-the-shelf accelerometer: the percentage error made by simply integrating the acceleration measurements may reach 68% in the general case of a movement in space, but it can be dramatically reduced to 9% with the proposed approach. An even better behavior is obtained when the movement is constrained to a plane or along an axis.
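The drift of naive double integration mentioned in the abstract is easy to reproduce; the sampling rate and bias values below are arbitrary illustration values, not the paper's experimental setup.

```python
import numpy as np

def double_integrate(acc, dt):
    """Naively integrate a 1-D acceleration signal twice
    (trapezoidal rule) to estimate displacement."""
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2 * dt)))
    pos = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2 * dt)))
    return pos

# A stationary sensor should report zero displacement, but a small
# constant measurement bias b makes the estimate grow like 0.5*b*t**2.
dt, n, bias = 0.01, 1000, 0.05           # ~10 s of samples, 0.05 m/s^2 bias
acc = np.zeros(n) + bias
drift = double_integrate(acc, dt)[-1]    # meters of spurious displacement
```

This quadratic error growth is why a correction step (a Kalman filter in the paper) is needed on top of the raw integration.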

  23. FPGA-controlled PCBA power-on self-test using processor's debug features
    B. Du, E. Sanchez, M. Reorda, J. Acle, A. Tsertov
    Formal Proceedings of the 2016 IEEE 19th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2016
    DOI: 10.1109/DDECS.2016.7482458
    KEYWORDS: safety, risk, reliability and quality; electrical and electronic engineering; hardware and architecture

  24. Faster-than-at-speed execution of functional programs: an experimental analysis
    P. Bernardi, A. Bosio, G. Natale, A. Guerriero, F. Venini
    IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC)

  25. Hybrid soft error mitigation techniques for COTS processor-based systems
    E. Chielle, B. Du, F. Kastensmidt, S. Cuenca-Asensi, L. Sterpone, M. Reorda
    LATS 2016 - 17th IEEE Latin-American Test Symposium
    DOI: 10.1109/LATW.2016.7483347
    KEYWORDS: electrical and electronic engineering; safety, risk, reliability and quality; hardware and architecture; watchdog; software-based techniques; soft errors; reliability; performance degradation; memory overhead; fault tolerance; fault coverage; error detection; cots processors; aerospace applications

  26. Improving the Functional Test Delay Fault Coverage: A Microprocessor Case Study
    A. Touati, A. Bosio, P. Girard, A. Virazel, P. Bernardi, M. Reorda
    2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
    DOI: 10.1109/ISVLSI.2016.42

  27. In-field functional test programs development flow for embedded FPUs
    R. Cantoro, D. Piumatti, P. Bernardi, S. De Luca, A. Sansonetti
    DOI: 10.1109/DFT.2016.7684079

  28. Kanzi: A Distributed, In-memory Key-Value Store
    M. Hemmatpour, B. Montrucchio, M. Rebaudengo
    KEYWORDS: in-memory key-value store, distributed system, rdma programming, high performance facility
    ABSTRACT: Traditional database systems sacrifice either availability or partition tolerance in order to offer strict consistency guarantees on data. However, the significant growth of Web-scale applications and the wider array of emerging workloads demand revisiting the need for full transactional consistency. One new dominant class of workload is the ability to efficiently support single-statement transactions consisting of either a Get or a Put operation, thus simplifying the consistency model. These simple workloads have given rise to decade-long efforts for building efficient key-value stores that often rely on a disk-resident, log-structured storage model distributed across many machines. To further expand the scope of key-value stores, in this paper we introduce Kanzi, a distributed, in-memory key-value store over a shared-memory architecture enabled by remote direct memory access (RDMA) technology. The simple data and transaction model of our proposed Kanzi may additionally serve as a generic (embedded) caching layer to speed up any disk-resident data-intensive workloads.

  29. MPDEA 2016 chairs' welcome & organization
    G. Squillero, A. Tonda
    GECCO 2016 Companion - Proceedings of the 2016 Genetic and Evolutionary Computation Conference
    DOI: 10.1145/2908961.2931650
    KEYWORDS: diversity promotion; evolutionary algorithms; software; computer science applications; computer vision and pattern recognition; computational theory and mathematics
    ABSTRACT: n.a.

  30. On the consolidation of mixed criticalities applications on multicore architectures
    S. Esposito, S. Avramenko, M. Violante
    2016 17th Latin-American Test Symposium (LATS)
    DOI: 10.1109/LATW.2016.7483340
    KEYWORDS: hardware, multicore processing, real-time systems, redundancy, software, n-versioning, temporal triple module redundancy (ttmr), triple module redundancy (tmr), fault tolerance, soft errors, software implemented fault tolerance (sift)
    ABSTRACT: Multicore architectures are very appealing as they offer the capability of integrating federated architectures, where multiple independent computing elements are devoted to specific tasks, into a single device, allowing significant mass and power savings. Often, the federated architectures are responsible for tasks of mixed criticalities, i.e., some of them are mission-/safety-critical real-time tasks, while others are non-critical tasks. When consolidating mixed-criticality tasks on multicore architectures, designers must guarantee that each core does not interfere with the others, introducing side effects not possible in federated architectures. In this paper we propose a hybrid solution based on a combination of known techniques: lightweight hardware redundancy, implemented using smart watchdogs and voter logic, cooperates with software redundancy, implemented using software temporal triple modular redundancy for those tasks with low criticality and no real-time requirements, and software triple modular redundancy for tasks with high criticality and real-time requirements. To guarantee lack of interference, a hypervisor is used to segregate the execution of each task in a dedicated resource partition. Preliminary experimental results are reported on a prototypical vision-based navigation system.
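A minimal sketch of the software temporal triple modular redundancy ingredient (the task and the injected fault below are hypothetical; the paper's actual scheme also involves watchdogs, voter logic, and a hypervisor):

```python
from collections import Counter

def tmr(task, *args):
    """Temporal triple modular redundancy: run the task three times
    in sequence and majority-vote on the results, so a transient
    error in a single execution is masked."""
    results = [task(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: uncorrectable error")
    return value

# A task hit by a transient fault on its second execution only:
calls = iter([0, 1, 0])                  # hypothetical fault model
faulty_task = lambda x: x + next(calls)  # returns x, x+1, x
```

The voter masks the single corrupted run at the cost of roughly 3x execution time, which is why the paper reserves it for low-criticality tasks without hard real-time requirements.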

  31. On the diagnostic analysis of IEEE 1687 networks
    R. Cantoro, M. Montazeri, M. Reorda, F. Zadegan, E. Larsson
    DOI: 10.1109/ETS.2016.7519294
    KEYWORDS: ijtag, scan networks, testing, diagnosis
    ABSTRACT: The IEEE 1687 standard describes reconfigurable structures allowing flexible access to the instruments existing within devices (e.g., to support test, diagnosis, calibration, etc.) by using configuration modules which act as controllable switches. The increasing adoption of this standard requires the availability of algorithms and tools to automate its usage. The resulting networks might be affected by defects preventing their correct operation. This necessitates solutions which allow not only testing against defects, but also identifying the location of possible faults via diagnosis. This paper for the first time addresses the problem of the diagnosis of IEEE 1687 networks. Experimental results gathered on a set of benchmark networks show the feasibility of the solution and provide a first idea about the length of the required input stimuli.

  32. On the robustness of DCT-based compression algorithms for space applications
    S. Avramenko, M. Sonza Reorda, M. Violante, G. Fey, J.-G. Mess, R. Schmidt
    IEEE International Symposium on On-Line Testing and Robust System Design
    DOI: 10.1109/IOLTS.2016.7604656
    KEYWORDS: soft errors; lossy compression algorithm; discrete cosine transform; reliability; register-level fault injection; commercial off-the-shelf
    ABSTRACT: High compression ratio is crucial to cope with the large amounts of data produced by telemetry sensors and the limited transmission bandwidth typical of space applications. A new generation of telemetry units is under development, based on Commercial Off-The-Shelf (COTS) components that may be subject to misbehaviors due to radiation-induced soft errors. The purpose of this paper is to study the impact of soft errors on different configurations of a discrete cosine transform (DCT)-based compression algorithm. This work's main contribution lies in providing some design guidelines

  33. Online Time Interference Detection in Mixed-Criticality Applications on Multicore Architectures using Performance Counters
    S. Esposito, M. Violante, M. Sozzi, M. Terrone, M. Traversone
    22nd IEEE International Symposium on On-Line Testing and Robust System Design
    KEYWORDS: hard real-time; safety critical applications; performance counters; fault detection; mixed-criticalities; multicore processing
    ABSTRACT: In this paper a novel technique is proposed for the online detection of timing interference in multicore architectures. The technique is aimed at mixed-criticality workloads. This paper describes a method that uses hardware performance counters to detect such misbehaviors. Experimental data is gathered, showing the viability of this method. The method can be used as a safety net in several scheduling approaches.

  34. Portfolio Optimization, a Decision-Support Methodology for Small Budgets
    I. Deplano, G. Squillero, A. Tonda
    Applications of Evolutionary Computation
    DOI: 10.1007/978-3-319-31204-0_5
    KEYWORDS: portfolio optimization; portfolio model; financial forecasting; mlp; multi-objective optimization; som; artificial neural networks
    ABSTRACT: Several machine learning paradigms have been applied to financial forecasting, attempting to predict the market's behavior, with the final objective of profiting from trading shares. While anticipating the performance of such a complex system is far from trivial, this issue becomes even harder when the investors do not have large amounts of money available. In this paper, we present an evolutionary portfolio optimizer for the management of small budgets. The expected returns are modeled by resorting to Multi-layer Perceptrons, trained on past market data, and the portfolio composition is chosen by approximating the solution to a multi-objective constrained problem. An investment simulator is then used to measure the portfolio performance. The proposed approach is tested on real-world data from the Milan stock exchange, exploiting information from January 2000 to June 2010 to train the framework, and data from July 2010 to August 2011 to validate it. The presented tool is finally proven able to obtain a more than satisfying profit for the considered time frame.

  35. Promoting diversity in evolutionary algorithms: An updated bibliography
    G. Squillero, A. Tonda
    GECCO 2016 Companion - Proceedings of the 2016 Genetic and Evolutionary Computation Conference
    DOI: 10.1145/2908961.2931651
    KEYWORDS: evolutionary algorithms; software; computer science applications; computer vision and pattern recognition; computational theory and mathematics; diversity promotion
    ABSTRACT: This short paper contains an extended list of references to diversity preservation methodologies, classified following the taxonomy presented in a previous publication. The list has been updated according to the contributions sent to the workshop "Measuring and Promoting Diversity in Evolutionary Computation", held during the conference GECCO 2016.

  36. Rejuvenation of NBTI-impacted processors using evolutionary generation of assembler programs
    F. Pellerey, M. Jenihhin, G. Squillero, J. Raik, M. Sonza Reorda, V. Tihhomirov, R. Ubar
    Proceedings of the Asian Test Symposium
    DOI: 10.1109/ATS.2016.57
    KEYWORDS: hardware rejuvenation; nbti; processor designs; electrical and electronic engineering; aging; critical path identification; evolutionary computation
    ABSTRACT: The time-dependent variation caused by Negative Bias Temperature Instability (NBTI) is agreed to be one of the main reliability concerns in integrated circuits implemented with current nanotechnology nodes. NBTI increases the threshold voltage of pMOS transistors: hence, it slows down signal propagation along logic paths between flip-flops. It may cause intermittent faults and, ultimately, permanent functional failures in processor circuits. In this paper, we study an NBTI mitigation approach in processor designs by rejuvenation of pMOS transistors along NBTI-critical paths. The method incorporates hierarchical fast, yet accurate modelling of NBTI-induced delays at transistor, gate and path levels for the generation of rejuvenation Assembler programs using an Evolutionary Algorithm. These programs are applied further as an execution overhead to drive to the recovery phase those pMOS transistors which are the most critical for the NBTI-induced path delay in processors. The experimental results demonstrate the efficiency of evolutionary generation and a significant reduction of NBTI-induced delays by the rejuvenation stimuli, with an execution overhead of 0.1% or less. The proposed approach aims at extending the reliable lifetime of nanoelectronic processors.

  37. Scalable FPGA Graph model to detect routing faults
    L. Sterpone, G. Cabodi, S. Finocchiaro, C. Loiacono, F. Savarese, B. Du
    IEEE International Symposium on On-Line Testing and Robust System Design
    DOI: 10.1109/IOLTS.2016.7604690
    KEYWORDS: measurement; field programmable gate arrays; circuit faults; routing; integrated circuit modeling; computational modeling; integrated circuit interconnections
    ABSTRACT: The SRAM cells that form the configuration memory of an SRAM-based FPGA make such FPGAs particularly vulnerable to soft errors. A soft error occurs when ionizing radiation corrupts the data stored in a circuit; the error persists until new data is written. Soft errors have long been recognized as a potential problem, as radiation can come from a variety of sources. This paper presents an FPGA fault model focusing on routing aspects. A graph model of the behavior of SRAM nodes in case of fault, built starting from the netlist description of well-known FPGA models, is presented. The possible logical effects of a soft error in the controlling configuration bits are also classified, providing statistics on the possible numbers of faults. Finally, fault metrics computed on a set of complex benchmarks are reported, proving the effectiveness of our approach.

  38. Test Time Minimization in Reconfigurable Scan Networks
    R. Cantoro, M. Palena, P. Pasini, M. Sonza Reorda
    ABSTRACT: Modern devices often include several embedded instruments, such as BISTs, sensors, and other analog components. New standards, such as IEEE Std. 1687, provide vehicles to access these instruments. In approaches based on reconfigurable scan networks, instruments are coupled with scan registers, connected into chains and interleaved with reconfigurable multiplexers, permitting selective access to different parts of the chain. A similar scenario is also supported by IEEE Std. 1149.1-2013, where a test data register can be constructed as a chain of multiple segments, some of which can be excluded or mutually selected. Testing for permanent faults affecting a reconfigurable scan network requires shifting test patterns through a certain number of network configurations. This paper presents a method to select the list of configurations needed to apply the complete test set in the minimum number of clock cycles. The method is based on a graph representation of the problem. Experimental results on some benchmark networks are provided, together with a comparison with other approaches based on heuristics. The provided results can be effectively used to evaluate the test time of sub-optimal approaches.

  39. Thermal issues in test: An overview of the significant aspects and industrial practice
    J. Alt, P. Bernardi, A. Bosio, R. Cantoro, H. Kerkhoff, A. Leininger, W. Molzer, A. Motta, C. Pacha, A. Pagani, A. Rohani, R. Strasser
    Proceedings of the IEEE VLSI Test Symposium
    DOI: 10.1109/VTS.2016.7477278
    KEYWORDS: automotive; component; functional test; thermal-aware test; yield
    ABSTRACT: Thermal phenomena occurring during test execution at the final stages of the manufacturing flow are considered a significant issue for several reasons, including dramatic effects such as circuit damage leading to yield loss. This paper examines how these phenomena can instead be exploited to improve test quality, reducing the overall test cost without affecting the yield.

  40. Tutorials at PPSN 2016
    C. Doerr, N. Bredeche, E. Alba, T. Bartz-Beielstein, D. Brockhoff, B. Doerr, G. Eiben, M. Epitropakis, C. Fonseca, A. Guerreiro, E. Haasdijk, J. Heinerman, J. Hubert, P. Lehre, L. Malagò, J. Merelo, J. Miller, B. Naujoks, P. Oliveto, S. Picek, N. Pillay, M. Preuss, P. Ryser-Welch, G. Squillero, J. Stork, D. Sudholt, A. Tonda, D. Whitley, M. Zaefferer
    Parallel Problem Solving from Nature - PPSN XIV
    DOI: 10.1007/978-3-319-45823-6_95
    ABSTRACT: PPSN 2016 hosts a total of 16 tutorials covering a broad range of current research in evolutionary computation. The tutorials range from introductory to advanced and specialized, but can all be attended without prior requirements. All PPSN attendees are cordially invited to take this opportunity to learn about ongoing research activities in our field!

  41. Baseline walking dataset exploiting accelerometer and gyroscope for fall prediction and prevention systems
    M. Hemmatpour, R. Ferrero, B. Montrucchio, M. Rebaudengo
    KEYWORDS: fall, prevention, healthcare, body monitoring
    ABSTRACT: Fall datasets usually record normal activities and transitions from one posture to another, along with falls. Many fall detection datasets based on different sensors are adopted by researchers to improve their systems. Although fall avoidance systems are dramatically increasing in number, a public fall prediction and prevention dataset based on an accelerometer and gyroscope is absent. Therefore, this study creates a dataset based on state-of-the-art techniques for simulating a fall. Different techniques are evaluated to find the best fall simulation. Since the accelerometer and gyroscope sensors embedded in a smartphone are recognized to be well suited for fall avoidance systems, in this study they are used to obtain data from users. Finally, some statistical analyses of the observed data are presented and a nonlinear regression model is proposed.