Publications

  1. A cost-effective proposal for an RFID-based system for agri-food traceability
    R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    INTERNATIONAL JOURNAL OF AD HOC AND UBIQUITOUS COMPUTING, in press
    KEYWORDS: agri-food; fruit; traceability; tracking; rfid; supply chain automation; information management; warehouse
    ABSTRACT: Agri-food companies operating in the packaging, storage and distribution of fruit and vegetables need information systems able to meet the traceability requirements imposed by the current European regulations. This paper evaluates the benefits and drawbacks of a semi-automated information management tracking system for a warehouse specialized in the fruit market. It is targeted at small and medium-sized companies with limited financial means for investments and without technical support on their premises. These requirements are met by using a PDA equipped with an RFID reader: the information collected throughout the production process is locally stored in the PDA and occasionally sent to a server. In this way, the proposed system relies neither on a widespread wireless network nor on fixed RFID readers, which could increase automation but would require greater investment and assistance.

  2. A security protocol for RFID traceability
    F. Gandino, B. Montrucchio, M. Rebaudengo
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, in press
    DOI: 10.1002/dac.3109
    KEYWORDS: rfid; security; public key; supply chain
    ABSTRACT: Nowadays, traceability represents a key activity in many sectors. Many modern traceability systems are based on radio-frequency identification (RFID) technology. However, the distributed information stored on RFID tags introduces new security problems. This paper presents traceability multi-entity cryptography, a high-level data protection scheme based on public key cryptography that is able to protect RFID data for traceability and chain activities. This scheme can manage entities with different permissions, and it is especially suitable for applications that require complex information systems. Traceability multi-entity cryptography prevents industrial espionage, guarantees the authenticity of the information, protects customer privacy, and detects malicious alterations of the information. In contrast to state-of-the-art RFID security schemes, the proposed protocol is applicable to standard RFID tags without any cryptographic capability, and it does not require a central database.

  3. (Over-)Realism in evolutionary computation: Commentary on "On the Mapping of Genotype to Phenotype in Evolutionary Algorithms" by Peter A. Whigham, Grant Dick, and James Maclaurin
    G. Squillero, A. Tonda
    GENETIC PROGRAMMING AND EVOLVABLE MACHINES, 2017
    DOI: 10.1007/s10710-017-9295-y
    KEYWORDS: evolutionary computation
    ABSTRACT: Inspiring metaphors play an important role at the beginning of an investigation, but they become less important in a mature research field, once the real phenomena involved are understood. Nowadays, in evolutionary computation, biological analogies should be taken into consideration only if they deliver significant advantages.

  4. A High-Level Approach to Analyze the Effects of Soft Errors on Lossless Compression Algorithms
    S. Avramenko, M. Sonza Reorda, M. Violante, G. Fey
    JOURNAL OF ELECTRONIC TESTING, 2017
    DOI: 10.1007/s10836-016-5637-6
    KEYWORDS: high-level fault injection; lossless compression; reliability; soft errors; electrical and electronic engineering
    ABSTRACT: In space applications, the data logging sub-system often requires compression to cope with large amounts of data as well as with limited storage and communication capabilities. The usage of Commercial Off-the-Shelf (COTS) hardware components is becoming more common, since they are particularly suitable to meet high performance requirements and to cut costs with respect to space-qualified components. On the other hand, given the characteristics of the space environment, the usage of COTS components makes radiation-induced soft errors highly probable. The purpose of this work is to analyze a set of lossless compression algorithms in order to compare their robustness against soft errors. The proposed approach works on the unhardened version of the programs, aiming to estimate their intrinsic robustness. The main contribution of the work lies in the investigation of the possibility of performing an early comparison between different compression algorithms at a high level, by only considering their data structures (corresponding to program variables). This approach is virtually agnostic of the downstream implementation details: it aims to compare the considered programs (in terms of robustness against soft errors) before the final computing platform is defined. The results of the high-level analysis can also be used to collect useful information to optimize the hardening phase. Experimental results based on the OpenRISC processor are reported. They suggest that, when properly adopted, the proposed approach makes it possible to compare a set of compression algorithms even with very limited knowledge of the target computing system.

  5. A Key Distribution Scheme for Mobile Wireless Sensor Networks: Q-s-Composite
    F. Gandino, R. Ferrero, M. Rebaudengo
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2017
    DOI: 10.1109/TIFS.2016.2601061
    KEYWORDS: safety; risk; reliability and quality; computer networks and communications; key management; random predistribution; wsn
    ABSTRACT: The majority of security systems for wireless sensor networks are based on symmetric encryption. The main open issue for these approaches concerns the establishment of symmetric keys. A promising key distribution technique is the random predistribution of secret keys. Despite its effectiveness, this approach involves considerable memory overhead, in contrast with the limited resources of wireless sensor networks. In this paper, an in-depth analytical study is conducted on the state-of-the-art key distribution schemes based on random predistribution. A new protocol, called q-s-composite, is proposed in order to exploit the best features of random predistribution while lowering its requirements. The main novelties of q-s-composite are the organization of the secret material, which reduces storage requirements, the proposed technique for pairwise key generation, and the limited number of predistributed keys used in the generation of a pairwise key. A comparative analysis demonstrates that the proposed approach provides a higher level of security than the state-of-the-art schemes.
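As an illustration of the random-predistribution idea discussed in the abstract above (a generic sketch, not the q-s-composite protocol itself), the following code gives each node a random key ring drawn from a global pool and derives a pairwise key from the shared keys. The pool size, ring size and threshold `Q` are illustrative assumptions:

```python
import hashlib
import os
import random

POOL_SIZE = 1000   # size of the global key pool (illustrative value)
RING_SIZE = 50     # keys predistributed to each node (illustrative value)
Q = 2              # minimum number of shared keys required, as in q-composite schemes

# Global pool of symmetric keys, indexed by key identifier.
pool = {i: os.urandom(16) for i in range(POOL_SIZE)}

def predistribute():
    """Assign a node a random subset (key ring) of the pool before deployment."""
    ids = random.sample(range(POOL_SIZE), RING_SIZE)
    return {i: pool[i] for i in ids}

def pairwise_key(ring_a, ring_b):
    """Derive a pairwise key by hashing all shared keys together,
    or return None if fewer than Q keys are shared."""
    shared = sorted(set(ring_a) & set(ring_b))
    if len(shared) < Q:
        return None
    h = hashlib.sha256()
    for i in shared:
        h.update(ring_a[i])
    return h.digest()

a, b = predistribute(), predistribute()
print("link established" if pairwise_key(a, b) else "no shared-key link")
```

Combining several shared keys into one pairwise key is what forces an attacker to compromise more than one predistributed key to break a link.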

  6. A Low-Cost Reliability vs. Cost Trade-Off Methodology to Selectively Harden Logic Circuits
    I. Wali, B. Deveautour, A. Virazel, A. Bosio, P. Girard, M. Sonza Reorda
    JOURNAL OF ELECTRONIC TESTING, 2017
    DOI: 10.1007/s10836-017-5640-6

  7. Evaluation of transient errors in GPGPUs for safety critical applications: An effective simulation-based fault injection environment
    S. Azimi, B. Du, L. Sterpone
    JOURNAL OF SYSTEMS ARCHITECTURE, 2017
    DOI: 10.1016/j.sysarc.2017.01.009
    KEYWORDS: fault injection; fault tolerant system; gpgpu; transient errors; validations; safety critical applications
    ABSTRACT: General Purpose Graphics Processing Units (GPGPUs) are increasingly adopted thanks to their high computational capabilities. GPGPUs are preferable to CPUs for a large range of computationally intensive applications, not necessarily related to computer graphics. Within the high performance computing context, GPGPUs require a large amount of resources and feature a large number of execution units. GPGPUs are also becoming attractive for safety-critical applications, where the phenomenon of transient errors is a major concern. In this paper we propose a novel fault injection simulation methodology for the accurate simulation of GPGPU applications during the occurrence of transient errors. The developed environment makes it possible to inject transient errors within the whole memory area of GPGPUs and into resources that are not user-accessible, such as the combinational logic and sequential elements of the streaming processors. The capabilities of the fault injection simulation platform have been evaluated by testing three benchmark applications, including mitigation approaches such as Duplication With Comparison, Triple Modular Redundancy and Algorithm-Based Fault Tolerance. The measured computational cost and time are minimal, thus enabling the usage of the developed approach for effective transient error evaluation.
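The mitigation side of the evaluation described above can be illustrated with a minimal sketch (not the paper's simulation environment): a single bit flip is injected into one replica of a buffer, and Duplication With Comparison flags the resulting mismatch. All names and parameters here are illustrative assumptions:

```python
import random

def inject_bit_flip(memory, rng=random.Random(0)):
    """Flip one random bit of a bytearray, emulating a radiation-induced
    transient error in a memory area (illustrative, not the paper's injector)."""
    i = rng.randrange(len(memory))
    memory[i] ^= 1 << rng.randrange(8)

def duplication_with_comparison(compute, data):
    """Duplication With Comparison (DWC): run the computation on two copies
    and compare; a fault corrupting one copy shows up as a mismatch."""
    a, b = bytearray(data), bytearray(data)
    inject_bit_flip(a)                      # corrupt one replica only
    return compute(a) == compute(b)         # False means the error was detected

silent = duplication_with_comparison(lambda m: sum(m), b"payload")
print("silent corruption" if silent else "error detected")
```

DWC detects the error but cannot correct it; that is what distinguishes it from the triple-redundancy schemes also evaluated in the paper.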

  8. Microprocessor Testing: Functional Meets Structural Test
    A. Touati, P. Girard, A. Virazel
    JOURNAL OF CIRCUITS, SYSTEMS, AND COMPUTERS, 2017
    DOI: 10.1142/S0218126617400072

  9. On the consolidation of mixed criticalities applications on multicore architectures
    S. Esposito, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2017
    DOI: 10.1007/s10836-016-5636-7
    KEYWORDS: mixed criticality; software implemented fault tolerance; hybrid architecture; multicore systems; real time applications
    ABSTRACT: In this paper we propose a hybrid solution to ensure the correctness of results when deploying several applications with different safety requirements on a single multi-core-based system. The proposed solution is based on lightweight hardware redundancy, implemented using smart watchdogs and voter logic, combined with software redundancy. Two software redundancy techniques are used: the first is software temporal triple modular redundancy, used for tasks with low criticality and no real-time requirements; the second is triple modular redundancy assisted by a hardware voter, used for tasks with high criticality and real-time requirements. A hypervisor is used to separate each task in the system into an independent resource partition, thus ensuring that no functional interference occurs. The proposed solution has been evaluated through hardware and software fault injection on two hardware platforms, featuring a dual-core processor and a quad-core processor respectively. Results show the high fault tolerance achieved by the proposed architecture.
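The software temporal triple modular redundancy mentioned above can be sketched roughly as follows; this is a generic majority-vote illustration, not the paper's actual implementation:

```python
from collections import Counter

def tmr(task, *args):
    """Software temporal TMR: run a task three times and vote on the results.
    A transient error corrupting one execution is out-voted by the other two."""
    results = [task(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: uncorrectable error detected")
    return value

# Example: a computation that could be corrupted by a transient fault.
print(tmr(lambda x: x * x, 7))  # 49
```

The temporal variant trades execution time for fault tolerance, which is why the paper reserves it for low-criticality tasks without real-time constraints.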

  10. A Fault-Tolerant Ripple-Carry Adder with Controllable-Polarity Transistors
    H. Mohammadi, P. Gaillardon, J. Zhang, G. Micheli, E. Sanchez, M. Reorda
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2016
    DOI: 10.1145/2988234

  11. A Flexible Framework for the Automatic Generation of SBST Programs
    A. Riefert, R. Cantoro, M. Sauer, M. Sonza Reorda, B. Becker
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2016
    DOI: 10.1109/TVLSI.2016.2538800
    KEYWORDS: sbst, testing, atpg, sat
    ABSTRACT: Software-based self-test (SBST) techniques are used to test processors and processor cores against permanent faults introduced by the manufacturing process, or to perform in-field test in safety-critical applications. However, the generation of an SBST program is usually associated with high costs, as it requires significant manual effort by a skilled engineer with in-depth knowledge of the processor under test. In this paper, we propose an approach for the automatic generation of SBST programs. First, we detail an automatic test pattern generation (ATPG) framework for the generation of functional test sequences. Second, we describe the extension of this framework with the concept of a validity checker module (VCM), which allows the specification of constraints on the generated sequences. Third, we use the VCM to express typical constraints that exist when SBST is adopted for in-field test. In our experimental results, we evaluate the proposed approach on a MIPS (microprocessor without interlocked pipeline stages)-like microprocessor. The results show that the proposed method is the first approach able to automatically generate SBST programs, for both end-of-manufacturing and in-field test, whose fault efficiency is superior to that of programs produced by state-of-the-art manual approaches.

  12. A Hybrid Fault-Tolerant Architecture for Highly Reliable Processing Cores
    I. Wali, A. Virazel, A. Bosio, P. Girard, S. Pravossoudovitch, M. Sonza Reorda
    JOURNAL OF ELECTRONIC TESTING, 2016
    DOI: 10.1007/s10836-016-5578-0
    KEYWORDS: fault tolerance, microprocessor, single event transient, permanent fault, delay fault, power consumption, high dependability, fault injection
    ABSTRACT: The increasing vulnerability of transistors and interconnects due to scaling is continuously challenging the reliability of future microprocessors. Lifetime reliability is gaining attention over performance as a design factor, even for lower-end commodity applications. In this work we present a low-power hybrid fault-tolerant architecture that improves the reliability of pipelined microprocessors by protecting their combinational logic parts. The architecture can handle a broad spectrum of faults with little impact on performance by combining different types of redundancy. Moreover, it addresses the error propagation behavior of nonlinear pipelines and error detection in pipeline stages with memory interfaces. Our case-study implementation of fault-tolerant MIPS microprocessors highlights four main advantages of the proposed solution. It offers (i) 11.6% power saving, (ii) improved transient error detection capability, (iii) lifetime reliability improvement, and (iv) better handling of fault accumulation effects, in comparison with TMR architectures. We also present a gate-level fault-injection framework that offers high fidelity in modeling physical defects and transient faults.

  13. A mobile and low-cost system for environmental monitoring: a case study
    A. Velasco, R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    SENSORS, 2016
    DOI: 10.3390/s16050710
    KEYWORDS: air monitoring; air pollution; wireless sensor networks; mobile sensors
    ABSTRACT: Northern Italy has one of the highest air pollution levels in the European Union. This paper describes a mobile wireless sensor network system intended to complement the already existing official air-quality monitoring systems of the metropolitan town of Torino. The system is characterized by high portability and low cost, in both acquisition and maintenance. The high portability of the system aims to improve the spatial distribution and resolution of the measurements from the official static monitoring stations. Commercial PM10 and O3 sensors were incorporated into the system, and were subsequently tested in a controlled environment and in the field. The field tests, performed in collaboration with the local environmental agency, revealed that the sensors can provide accurate data if properly calibrated and maintained. Further tests were carried out by mounting the system on bicycles in order to increase its mobility.

  14. An Error-Detection and Self-Repairing Method for Dynamically and Partially Reconfigurable Systems
    M. Sonza Reorda, L. Sterpone, A. Ullah
    IEEE TRANSACTIONS ON COMPUTERS, 2016
    DOI: 10.1109/TC.2016.2607749
    KEYWORDS: transient errors; performances; self-repairing; fpga; dynamic reconfiguration; partial reconfiguration
    ABSTRACT: Reconfigurable systems are gaining increasing interest in the domain of safety-critical applications, for example in the space and avionic domains. In fact, the capability of reconfiguring the system during run-time execution and the high computational power of modern Field Programmable Gate Arrays (FPGAs) make these devices suitable for intensive data processing tasks. Moreover, such systems must also guarantee self-awareness, self-diagnosis and self-repair capabilities in order to cope with errors due to the harsh conditions typically existing in some environments. In this paper we propose a self-repairing method for partially and dynamically reconfigurable systems applied at a fine granularity level. Our method is able to detect, correct and recover from errors using the run-time capabilities offered by modern SRAM-based FPGAs. Fault injection campaigns have been executed on a dynamically reconfigurable system embedding a number of benchmark circuits. Experimental results demonstrate that our method achieves full detection of single and multiple errors, while significantly improving system availability with respect to traditional error detection and correction methods.

  15. Anatomy of a portfolio optimizer under a limited budget constraint
    I. Deplano, G. Squillero, A. Tonda
    EVOLUTIONARY INTELLIGENCE, 2016
    DOI: 10.1007/s12065-016-0144-3
    KEYWORDS: portfolio optimization; multi-layer perceptron; multi-objective optimization; financial forecasting
    ABSTRACT: Predicting the market's behavior to profit from trading stocks is far from trivial. Such a task becomes even harder when investors do not have large amounts of money available, and thus cannot influence this complex system in any way. Machine learning paradigms have already been applied to financial forecasting, but usually with no restrictions on the size of the investor's budget. In this paper, we analyze an evolutionary portfolio optimizer for the management of limited budgets, dissecting each part of the framework and discussing in detail the issues and the motivations that led to the final choices. Expected returns are modeled by resorting to artificial neural networks trained on past market data, and the portfolio composition is chosen by approximating the solution to a multi-objective constrained problem. An investment simulator is eventually used to measure the portfolio performance. The proposed approach is tested on real-world data from the New York, Milan and Paris stock exchanges, exploiting data from June 2011 to May 2014 to train the framework, and data from June 2014 to July 2015 to validate it. Experimental results demonstrate that the presented tool is able to obtain a more than satisfying profit for the considered time frame.

  16. Divergence of character and premature convergence: A survey of methodologies for promoting diversity in evolutionary optimization
    G. Squillero, A. Tonda
    INFORMATION SCIENCES, 2016
    DOI: 10.1016/j.ins.2015.09.056
    KEYWORDS: diversity preservation; evolutionary optimization
    ABSTRACT: In the past decades, different evolutionary optimization methodologies have been proposed by scholars and exploited by practitioners in a wide range of applications. Each paradigm shows distinctive features, typical advantages, and characteristic disadvantages; however, one single problem is shared by almost all of them: the "lack of speciation". While natural selection favors variations toward greater divergence, in artificial evolution candidate solutions do homologize. Many authors have argued that promoting diversity would be beneficial in evolutionary optimization processes, and that it could help avoid premature convergence on sub-optimal solutions. This paper surveys the research in this area up to the mid-2010s, re-orders and re-interprets different methodologies within a single framework, and proposes a novel three-axis taxonomy. Its goal is to provide the reader with a unifying view of the many contributions in this important corpus, allowing comparisons and informed choices. Characteristics of the different techniques are discussed and similarities are highlighted; practical ways to measure and promote diversity are also suggested.

  17. Exploiting Evolutionary Modeling to Prevail in Iterated Prisoner's Dilemma Tournaments
    M. Gaudesi, E. Piccolo, G. Squillero, A. Tonda
    IEEE TRANSACTIONS ON COMPUTATIONAL INTELLIGENCE AND AI IN GAMES, 2016
    DOI: 10.1109/TCIAIG.2015.2439061
    KEYWORDS: games, sociology, statistics, computational modeling, mathematical model, adaptation models, game theory, opponent modeling, evolutionary algorithms, iterated prisoner's dilemma, non-deterministic finite state machine
    ABSTRACT: The iterated prisoner's dilemma is a famous model of cooperation and conflict in game theory. Its origin can be traced back to the Cold War, and countless strategies for playing it have been proposed so far, either designed by hand or automatically generated by computers. In the 2000s, scholars started focusing on adaptive players, that is, players able to classify their opponent's behavior and adopt an effective counter-strategy. The player presented in this paper pushes this idea even further: it builds a model of the current adversary from scratch, without relying on any pre-defined archetypes, and tweaks it as the game develops using an evolutionary algorithm; at the same time, it exploits the model to lead the game into the most favorable continuation. Models are compact non-deterministic finite state machines; they are extremely efficient in predicting opponents' replies, without necessarily being completely correct. Experimental results show that such a player is able to win several one-to-one games against strong opponents taken from the literature, and that it consistently prevails in round-robin tournaments of different sizes.
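A minimal sketch of the underlying game may help fix ideas: the standard IPD payoff matrix and a loop of rounds, with textbook tit-for-tat as one player. This illustrates the setting only, not the paper's evolutionary opponent-modeling player:

```python
# Classic IPD payoffs: (my_move, their_move) -> (my_score, their_score)
PAYOFF = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not history else history[-1]

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated game; each strategy sees the opponent's past moves."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # opponent moves observed by each player
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Tit-for-tat against an always-defect opponent.
print(play(tit_for_tat, lambda h: "D"))  # (9, 14)
```

An adaptive player like the one in the paper would replace the fixed strategy with a model of the opponent that is refined from `hist` as the game progresses.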

  18. Fast Hierarchical Key Management Scheme with Transitory Master Key for Wireless Sensor Networks
    F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    IEEE INTERNET OF THINGS JOURNAL, 2016
    DOI: 10.1109/JIOT.2016.2599641
    KEYWORDS: information systems; hardware and architecture; computer science applications; computer vision and pattern recognition; computer networks and communications; information systems and management; key management; symmetric encryption; transitory master key (mk); wireless sensor network (wsn); signal processing
    ABSTRACT: Symmetric encryption is the most widely adopted security solution for wireless sensor networks. The main open issue in this context is represented by the establishment of symmetric keys. Although many key management schemes have been proposed in order to guarantee a high security level, a solution without weaknesses does not yet exist. An important class of key management schemes is based on a transitory master key (MK). In this approach, a global secret is used during the initialization phase to generate pair-wise keys, and it is deleted during the working phase. However, if an adversary compromises a node before the deletion of the MK, the security of the whole network is compromised. In this paper, a new key negotiation routine is proposed. The new routine is integrated with a well-known key computation mechanism based on a transitory master secret. The goal of the proposed approach is to reduce the time required for the initialization phase, thus reducing the probability that the master secret is compromised. This goal is achieved by splitting the initialization phase into hierarchical subphases with an increasing level of security. An experimental analysis demonstrates that the proposed scheme provides a significant reduction in the time required before deleting the transitory secret material, thus increasing the overall security level. Moreover, the proposed scheme makes it possible to add new nodes after the first deployment, with a suited routine able to complete the key establishment in the same time as for the initial deployment.

  19. Identification and Rejuvenation of NBTI-Critical Logic Paths in Nanoscale Circuits
    M. Jenihhin, G. Squillero, T.S. Copetti, V. Tihhomirov, S. Kostin, M. Gaudesi, F. Vargas, J. Raik, M. Sonza Reorda, L.B. Poehls, R. Ubar, G.C. Medeiros
    JOURNAL OF ELECTRONIC TESTING, 2016
    DOI: 10.1007/s10836-016-5589-x
    KEYWORDS: hardware rejuvenation, aging, nbti, critical path identification, logic circuit, evolutionary computation, microgp, zamiacad
    ABSTRACT: The Negative Bias Temperature Instability (NBTI) phenomenon is agreed to be one of the main reliability concerns in nanoscale circuits. It increases the threshold voltage of pMOS transistors, thus slowing down signal propagation along logic paths between flip-flops. NBTI may cause intermittent faults and, ultimately, permanent functional failures of the circuit. In this paper, we propose an innovative NBTI mitigation approach that rejuvenates the nanoscale logic along NBTI-critical paths. The method is based on the hierarchical identification of NBTI-critical paths and on the generation of rejuvenation stimuli using an Evolutionary Algorithm. A new, fast, yet accurate model for the computation of NBTI-induced delays at gate level is developed, based on intensive SPICE simulations of individual gates. The generated rejuvenation stimuli are used to drive to the recovery phase those pMOS transistors that are most critical for the NBTI-induced path delay. The rejuvenation procedure is intended to be applied to the circuit periodically, as an execution overhead. Experimental results on a set of designs demonstrate a reduction of NBTI-induced delays by up to a factor of two, with an execution overhead of 0.1% or less. The proposed approach aims at extending the reliable lifetime of nanoelectronics.

  20. Investigation of interference models for RFID systems
    L. Zhang, R. Ferrero, F. Gandino, M. Rebaudengo
    SENSORS, 2016
    DOI: 10.3390/s16020199
    KEYWORDS: rfid; interference model; reader-to-reader collision
    ABSTRACT: The reader-to-reader collision in an RFID system is a challenging problem for communications technology. In order to model the interference between RFID readers, different interference models have been proposed, mainly based on two approaches: single and additive interference. The former only considers the interference from one reader within a certain range, whereas the latter takes into account the sum of all of the simultaneous interferences in order to emulate a more realistic behavior. Although the difference between the two approaches has been theoretically analyzed in previous research, their effects on the estimated performance of reader-to-reader anti-collision protocols have not yet been investigated. In this paper, the influence of the interference model on anti-collision protocols is studied by simulating a representative state-of-the-art protocol. The results presented in this paper highlight that the use of additive models, although more computationally intensive, is mandatory in order to accurately evaluate the performance of anti-collision protocols.
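The difference between the two interference models described above can be sketched in a few lines. The `decodable` function, the power values and the threshold are illustrative assumptions, not taken from the paper:

```python
def decodable(signal, interferers, threshold=10.0, additive=True):
    """Decide whether a reader can decode a tag reply under a given
    interference model. The additive model sums ALL simultaneous interferer
    powers; the single model considers only the strongest interferer."""
    interference = sum(interferers) if additive else max(interferers, default=0.0)
    return interference == 0.0 or signal / interference >= threshold

# Two weak interferers: harmless taken one by one, disruptive when summed.
signal, interferers = 100.0, [6.0, 6.0]
print(decodable(signal, interferers, additive=False))  # True
print(decodable(signal, interferers, additive=True))   # False
```

The example shows the divergence the paper studies: a single-interference model can declare a link collision-free while an additive model, closer to physical reality, does not.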

  21. KITO tool: A fault injection environment in Linux kernel data structures
    A. Velasco, B. Montrucchio, M. Rebaudengo
    MICROELECTRONICS RELIABILITY, 2016
    DOI: 10.1016/j.microrel.2016.02.011
    KEYWORDS: fault injection; linux kernel; operating system dependability; safety-critical applications
    ABSTRACT: Transient faults in safety-critical computer-based systems represent a major issue for guaranteeing correct system behaviour. Fault injection is a commonly used method to evaluate the sensitivity of such systems. This paper presents a fault injection tool, called KITO, to evaluate the effects of faults in memory containing data structures belonging to a Unix-based Operating System and, in particular, elements linked to resource synchronization management. An experimental analysis was conducted on a large set of memory elements of the Operating System itself, while the system was subject to stress from benchmark programs that use different elements of the Linux kernel. Experimental results show that synchronization aspects of the kernel are susceptible to a significant set of possible errors ranging from performance degradation to failure in successfully completing the benchmark application.

  22. New Techniques to Reduce the Execution Time of Functional Test Programs
    M. Gaudesi, I. Pomeranz, M. Sonza Reorda, G. Squillero
    IEEE TRANSACTIONS ON COMPUTERS, 2016
    DOI: 10.1109/TC.2016.2643663
    KEYWORDS: test compaction; test generation; test program; software-based self-test
    ABSTRACT: The compaction of test programs for processor-based systems is of utmost practical importance: Software-Based Self-Test (SBST) is nowadays increasingly adopted, especially for in-field test of safety-critical applications, and both the size and the execution time of the test are critical parameters. However, while compacting the size of binary test sequences has been thoroughly studied over the years, the reduction of the execution time of test programs is still a rather unexplored area of research. This paper describes a family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size. The proposed solutions are based on instruction removal and restoration, which is shown to be computationally more efficient than instruction removal alone. Experimental results demonstrate the compaction capabilities, and allow analyzing computational costs and effectiveness of the different algorithms.
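The instruction removal and restoration idea described above can be sketched as a greedy loop: tentatively drop each instruction, keep the removal if coverage is preserved, restore it otherwise. This is a generic illustration under an assumed `covers` coverage metric, not the paper's algorithms:

```python
def compact(program, covers):
    """Greedy removal-and-restoration compaction of a test program.
    `covers(program)` stands in for a fault-simulation run (hypothetical)."""
    baseline = covers(program)
    kept = list(program)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if covers(candidate) >= baseline:
            kept = candidate        # removal preserved coverage: accept it
        else:
            i += 1                  # coverage dropped: restore, move on
    return kept

# Toy example: "coverage" is the set of distinct opcodes exercised.
prog = ["add", "nop", "add", "mul", "nop"]
print(compact(prog, lambda p: len(set(p))))  # ['add', 'mul', 'nop']
```

In the real setting each `covers` call is an expensive fault simulation, which is why the paper focuses on making this kind of loop computationally efficient.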

  23. OLT(RE)2: an On-Line on-demand Testing approach for permanent Radiation Effects in REconfigurable systems
    D. Cozzi, S. Korf, L. Cassano, J. Hagemeyer, A. Domenici, C. Bernardeschi, M. Porrmann, L. Sterpone
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2016
    DOI: 10.1109/TETC.2016.2586195
    KEYWORDS: permanent faults; on-line testing; reconfiguration; fpga; radiation effects
    ABSTRACT: Reconfigurable systems have gained great interest in a wide range of application fields, including aerospace, where electronic devices are exposed to a very harsh working environment. Commercial SRAM-based FPGA devices represent an extremely interesting hardware platform for this kind of system, since they combine low cost with the possibility to utilize state-of-the-art processing power as well as the flexibility of reconfigurable hardware. In this paper we present OLT(RE)2: an on-line, on-demand approach to test permanent faults induced by radiation in reconfigurable systems used in space missions. The proposed approach relies on a test circuit and on custom place-and-route algorithms. OLT(RE)2 exploits the partial dynamic reconfigurability offered by today's SRAM-based FPGAs to place the test circuits at run-time. The goal of OLT(RE)2 is to test unprogrammed areas of the FPGA before using them, thus preventing functional modules of the reconfigurable system from being placed on areas with faulty resources. Experimental results have shown that (i) it is possible to generate, place and route the test circuits needed to detect on average more than 99% of the physical wires and about 97% of the programmable interconnection points of an arbitrarily large region of the FPGA in a reasonable time, and that (ii) it is possible to download and run the whole test suite on the target device without interfering with the normal functioning of the system.

  24. Observability solutions for in-field functional test of processor-based systems: a survey and quantitative test case evaluation
    J. Perez Acle, R. Cantoro, E. Sanchez, M. Sonza Reorda, G. Squillero
    MICROPROCESSORS AND MICROSYSTEMS, 2016
    DOI: 10.1016/j.micpro.2016.09.002
    KEYWORDS: functional test; software-based self-test; performance counters; fault simulation; observability
    ABSTRACT: The usage of electronic systems in safety-critical applications requires mechanisms for the early detection of faults affecting the hardware while the system is in the field. When the system includes a processor, one approach is to make use of functional test programs that are run by the processor itself. Such programs exercise the different parts of the system and eventually expose the difference between a fully functional system and a faulty one. Their effectiveness depends, among other factors, on the mechanism adopted to observe the behavior of the system, which in turn is deeply affected by the constraints imposed by the application environment. This paper describes different mechanisms for supporting the observation of fault effects during such in-field functional test, and it reports and discusses the results of an experimental analysis performed on some representative case studies, which allow drawing some general conclusions. The gathered results allow the quantitative evaluation of the drop in fault coverage resulting from the adoption of the alternative approaches, with respect to the ideal case in which all the outputs can be continuously monitored, which is the typical scenario for test generation. The reader can thus better evaluate the advantages and disadvantages of each approach. As a major contribution, the paper shows that in the worst case the drop can be significant, while it can be minimized (without introducing any significant extra cost in terms of test generation and duration) through the adoption of a suitable observation mechanism, e.g., using the Performance Counters possibly existing in the system. Suitable techniques to implement fault simulation campaigns to assess the effectiveness of different observation mechanisms are also described.

  25. On the prediction of Radiation-induced SETs in Flash-based FPGAs
    S. Azimi, B. Du, L. Sterpone
    MICROELECTRONICS RELIABILITY, 2016
    KEYWORDS: fpgas, sets, seus, radiation effects
    ABSTRACT: The present work proposes a methodology to predict radiation-induced Single Event Transient (SET) phenomena within the silicon structure of Flash-based FPGA devices. The method is based on a Monte Carlo analysis that calculates the effective duration and amplitude of a SET once it is generated by a radiation strike, and it effectively characterizes the sensitivity of a circuit to transient effect phenomena. Experimental results compare data from radiation tests performed at different Linear Energy Transfer (LET) values with the respective SET sensitivities

  26. Online Test of Control Flow Errors: A New Debug Interface-Based Approach
    B. Du, M. Sonza Reorda, L. Sterpone, L. Parra, M. Portela-García, A. Lindoso, L. Entrena
    IEEE TRANSACTIONS ON COMPUTERS, 2016
    DOI: 10.1109/TC.2015.2456014
    KEYWORDS: online test, control flow checking
    ABSTRACT: Detecting the effects of transient faults is a key point in many processor-based safety-critical applications. This paper proposes to adopt the debug interface module existing today in several processors/controllers available on the market. In this way, we can achieve a good detection capability and small latency with respect to control flow errors, while the cost of adopting the proposed technique is rather limited and does not involve any change either in the processor hardware or in the application software. The method works even if the processor uses caches. We experimentally evaluated its characteristics, demonstrating the advantages and showing the limitations on two pipelined processors. Experimental results obtained by fault injection with different software applications demonstrate that the method is able to achieve high fault coverage (more than 95 percent in nearly all the considered cases) with a limited cost in terms of area and performance degradation

  27. Optimizing groups of colluding strong attackers in mobile urban communication networks with evolutionary algorithms
    D. Bucur, G. Iacca, M. Gaudesi, G. Squillero, A. Tonda
    APPLIED SOFT COMPUTING, 2016
    DOI: 10.1016/j.asoc.2015.11.024
    KEYWORDS: routing; network security; evolutionary algorithms; delay-tolerant network; cooperative co-evolution
    ABSTRACT: In novel forms of the Social Internet of Things, any mobile user within communication range may help route messages for another user in the network. The resulting message delivery rate depends both on the users' mobility patterns and on the message load in the network. This new type of configuration, however, poses new challenges to security; amongst them, assessing the effect that a group of colluding malicious participants can have on the global message delivery rate in such a network is far from trivial. In this work, after modeling this question as an optimization problem, we are able to find quite interesting results by coupling a network simulator with an evolutionary algorithm. The chosen algorithm is specifically designed to solve problems whose solutions can be decomposed into parts sharing the same structure. We demonstrate the effectiveness of the proposed approach on two medium-sized Delay-Tolerant Networks, realistically simulated in the urban contexts of two cities with very different route topologies: Venice and San Francisco. In all experiments, our methodology produces attack patterns that lower network performance far more than those of previous studies on the subject, as the evolutionary core is able to exploit the specific weaknesses of each target configuration

  28. Scan-Chain Intra-Cell Aware Testing
    A. Touati, A. Bosio, P. Girard, A. Virazel, P. Bernardi, M. Sonza Reorda, E. Auvray
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2016
    DOI: 10.1109/TETC.2016.2624311
    ABSTRACT: This paper first presents an evaluation of the effectiveness of different test pattern sets in terms of ability to detect possible intra-cell defects affecting the scan flip-flops. The analysis is then used to develop an effective test solution to improve the overall test quality. As a major result, the paper demonstrates that by combining test vectors generated by a commercial ATPG to detect stuck-at and delay faults, plus a fragment of extra test patterns generated to specifically target the escaped defects, we can obtain a higher intra-cell defect coverage (i.e., 6.46% on average) and a shorter test time (i.e., 42.20% on average) than by straightforwardly using an ATPG which directly targets these defects.

  29. Toner savings based on quasi-random sequences and a perceptual study for green printing
    B. Montrucchio, R. Ferrero
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016
    DOI: 10.1109/TIP.2016.2552641
    KEYWORDS: green computing, toner savings, sobol' sequence
    ABSTRACT: Toner savings in monochromatic printing are an important target for improving green computing performance and, more specifically, green printing. In order to extend the lifetime of the printer cartridge, some options are available for laser printers, usually reducing the number of dots with respect to the normal print quality. However, available algorithms and patents do not provide a method for dynamically adapting the percentage of toner savings to the required printing quality. In this paper, we introduce a new quasi-random-sequence-based algorithm for reducing the number of dots in the printing process, able to achieve optimal discrepancy and low computational complexity at all print quality levels. In order to reduce patterns in the removed dots, blue noise dithering is applied when the desired percentage of toner savings is moderate. The proposed solution can be easily implemented in the printer firmware, given its low computational complexity. In order to verify the results from a perceptual point of view, an extended test with 135 volunteers and more than 5000 comparisons has been performed, in addition to checking that toner is effectively saved. Results show that the proposed approach can produce a reduction of the perceived quality almost directly proportional to the number of monochromatic dots skipped, with only a limited influence from the font used. The perceptual results of the proposed method are better than those of previous approaches. The proposed algorithm appears to be a promising technique for improving green printing in monochromatic laser printers without using custom fonts
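The dot-skipping idea can be sketched with a low-discrepancy sequence. The following is a minimal illustration, not the authors' algorithm: it uses the base-2 radical inverse (the van der Corput sequence, which is also the first dimension of a Sobol' sequence) to decide which dots to skip, so the skipped dots are spread evenly for any requested savings percentage. The function names are hypothetical.

```python
def radical_inverse_base2(n: int) -> float:
    """Base-2 radical inverse (van der Corput sequence): mirror the bits of n
    around the binary point, yielding a low-discrepancy value in [0, 1)."""
    value, weight = 0.0, 0.5
    while n:
        if n & 1:
            value += weight
        n >>= 1
        weight /= 2.0
    return value

def skip_mask(num_dots: int, save_fraction: float) -> list:
    """Flag dot i for skipping when its quasi-random value falls below the
    requested savings fraction; skipped dots end up evenly spread."""
    return [radical_inverse_base2(i) < save_fraction for i in range(num_dots)]

mask = skip_mask(1000, 0.25)  # flags every dot whose index is 0 mod 4, i.e. 25%
```

Because the sequence is deterministic and cheap to evaluate per dot, a scheme like this fits the firmware constraint mentioned in the abstract; the blue-noise dithering step for moderate savings levels is not shown.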

  30. UA2TPG: An untestability analyzer and test pattern generator for SEUs in the configuration memory of SRAM-based FPGAs
    C. Bernardeschi, L. Cassano, A. Domenici, L. Sterpone
    INTEGRATION, 2016
    DOI: 10.1016/j.vlsi.2016.03.004
    KEYWORDS: single event upset; sram-based fpga; untestability analysis; model checking
    ABSTRACT: This paper presents UA2TPG, a static analysis tool for the untestability proof and automatic test pattern generation for SEUs in the configuration memory of SRAM-based FPGA systems. The tool is based on the model-checking verification technique. An accurate fault model for both logic components and routing structures is adopted. Experimental results show that many circuits have a significant number of untestable faults, and their detection enables more efficient test pattern generation and on-line testing. The tool is mainly intended to support on-line testing of critical components in FPGA fault-tolerant systems

  31. COTS-Based High-Performance Computing for Space Applications
    S. Esposito, C. Albanese, M. Alderighi, F. Casini, L. Giganti, M. Esposti, C. Monteleone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2015
    DOI: 10.1109/TNS.2015.2492824
    KEYWORDS: commercial-off-the-shelf (cots); space applications; software implemented fault tolerance (sift); watchdog timer; watchdog processor; memory protection; memory encoding; fault injection; single event upset (seu); central processing unit (cpu)
    ABSTRACT: Commercial-off-the-shelf devices are often advocated as the only solution to the increasing performance requirements in space applications. This paper presents the solutions developed in the frame of the European Space Agency's HiRel program, concluded in December 2014, where a number of techniques proposed in the past 10 years have been used to design a highly reliable system, which has been selected for forthcoming space missions. The paper presents the system architecture, describes the performed evaluations, and discusses the results

  32. Designing Autonomous Race Car Models for Learning Advanced Topics in Hard Real-Time System
    E. Bagalini, M. Violante
    INTERNATIONAL JOURNAL OF ROBOTICS APPLICATIONS AND TECHNOLOGIES, 2015
    DOI: 10.4018/IJRAT.2015010101
    KEYWORDS: closed-loop control, computer vision, embedded programming, the freescale cup, hard real-time, intelligent vehicle, model identification, smart car, autonomous vehicles
    ABSTRACT: This chapter illustrates the design challenges that teams of graduate students face, and the solutions they devised, to develop autonomous vehicles for the worldwide speed race competition known as The Freescale Cup. The purpose of the competition is to design the fastest car able to complete an unknown racetrack autonomously. The challenge posed by the competition is an ideal teaching ground for engineering students, who can reinforce through hands-on experience the theoretical aspects learned in class in fields such as hard real-time system theory, control theory, model-based design theory, and computer vision theory. Besides the technical aspects, as the cars are developed by teams of students, the competition also supports the development of soft skills (such as teamwork and communication skills) that are not normally addressed by classical curricula. The results achieved after three years of participation suggest that this approach is very effective in improving the quality of engineering curricula

  33. Development Flow for On-Line Core Self-Test of Automotive Microcontrollers
    P. Bernardi, R. Cantoro, S. De Luca, E. Sanchez, A. Sansonetti
    IEEE TRANSACTIONS ON COMPUTERS, 2015
    DOI: 10.1109/TC.2015.2498546
    KEYWORDS: microprocessors and microcomputers, reliability and testing, software-based self-test
    ABSTRACT: Software-Based Self-Test is an effective methodology for devising the on-line testing of Systems-on-Chip. In the automotive field, a set of test programs to be run during mission mode is also called a Core Self-Test library. This paper introduces several new contributions: (1) it illustrates the issues that need to be taken into account when generating test programs for on-line execution; (2) it proposes an overall development flow, based on the ordered generation of test programs, that minimizes the computational effort; (3) it provides guidelines for allowing the coexistence of the Core Self-Test library with the mission application while guaranteeing execution robustness. The proposed methodology has been applied to a large industrial case study. The coverage level reached after one year of team work is over 87% stuck-at fault coverage, and the execution time is compliant with the ISO 26262 specification. Experimental results suggest that alternative approaches may require excessive evaluation time, thus making the generation flow unfeasible for large designs

  34. Layout and Radiation Tolerance Issues in High-Speed Links
    R. Giordano, A. Aloisio, V. Bocci, M. Capodiferro, V. Izzo, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2015
    DOI: 10.1109/TNS.2015.2498307

  35. On the Functional Test of Branch Prediction Units
    E. Sanchez, M. Sonza Reorda
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2015
    DOI: 10.1109/TVLSI.2014.2356612
    KEYWORDS: sbst; functional test; branch history table; branch prediction unit
    ABSTRACT: Branch prediction units (BPUs) are highly efficient modules that can significantly decrease the negative impact of branches in pipelined processors. Traditional test solutions, mainly based on Design for Testability techniques, are often inadequate to tackle specific test constraints, such as those found when incoming inspection or online test is considered. Following a functional approach, based on running a suitable test program and checking the processor behavior, may represent an alternative solution, provided that an effective test algorithm is available for the target unit. In this paper, a functional approach targeting the test of the BPU memory is proposed, which leads to the generation of suitable test programs whose effectiveness is independent of the specific implementation of the BPU. Two very common BPU architectures (branch history table and branch target buffer) are considered. The effectiveness of the approach is validated by resorting to an open-source computer architecture simulator. Experimental results show that the proposed method is able to thoroughly test the BPU memory, allowing any March algorithm to be transformed into a corresponding test program; we also provide both theoretical and experimental proofs that the memory and execution time requirements grow linearly with the BPU size
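The March-algorithm core that such test programs re-express can be illustrated in isolation. This is a generic sketch, not the paper's generator: a 1-bit-per-cell memory is abstracted behind read/write callbacks (in the actual approach those accesses become carefully constructed branch sequences exercising the BPU memory), and the well-known March C- algorithm reports whether any cell misbehaves.

```python
def march_c_minus(read, write, size):
    """March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}.
    Returns True as soon as a read disagrees with its expected value."""
    for a in range(size):                      # up(w0)
        write(a, 0)
    for a in range(size):                      # up(r0, w1)
        if read(a) != 0:
            return True
        write(a, 1)
    for a in range(size):                      # up(r1, w0)
        if read(a) != 1:
            return True
        write(a, 0)
    for a in reversed(range(size)):            # down(r0, w1)
        if read(a) != 0:
            return True
        write(a, 1)
    for a in reversed(range(size)):            # down(r1, w0)
        if read(a) != 1:
            return True
        write(a, 0)
    for a in reversed(range(size)):            # down(r0)
        if read(a) != 0:
            return True
    return False

mem = [0] * 16
ok = march_c_minus(lambda a: mem[a], lambda a, v: mem.__setitem__(a, v), 16)
# A cell stuck at 1 is caught on the first read element:
bad = march_c_minus(lambda a: 1 if a == 5 else mem[a],
                    lambda a, v: mem.__setitem__(a, v), 16)
```

Since every cell is visited a constant number of times per March element, both memory and execution time grow linearly with the memory size, consistent with the scaling the abstract claims.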

  36. On-line Test of Control Flow Errors: A new Debug Interface-based approach
    B. Du, M. Sonza Reorda, L. Sterpone, L. Parra, M. Portela-Garcia, A. Lindoso, L. Entrena
    IEEE TRANSACTIONS ON COMPUTERS, 2015
    DOI: 10.1109/TC.2015.2456014

  37. Radiation-induced single event transients modeling and testing on nanometric flash-based technologies
    L. Sterpone, B. Du, S. Azimi
    MICROELECTRONICS RELIABILITY, 2015
    DOI: 10.1016/j.microrel.2015.07.035
    KEYWORDS: nanometric; fpgas; radiation; heavy ions; fault tolerance; reliability
    ABSTRACT: The increasing technology node scaling makes VLSI devices extremely vulnerable to Single Event Effects (SEEs) induced by highly charged particles such as heavy ions, increasing the sensitivity to Single Event Transients (SETs). In this paper, we describe a new methodology, combining an analytical and an oriented model, for analyzing the SET sensitivity of nanometric technologies. The paper includes radiation test experiments performed on Flash-based FPGAs using a heavy-ion radiation beam. Experimental results are detailed and discussed, demonstrating the effective mitigation capabilities achieved thanks to the adoption of the developed model

  38. Using Benchmarks for Radiation Testing of Microprocessors and FPGAs
    W. Robinson, P. Rech, M. Aguirre, A. Barnard, M. Desogus, L. Entrena, M. Garcia-Valderas, S. Guertin, D. Kaeli, F. Kastensmidt, B. Kiddie, A. Sanchez-Clemente, M. Sonza Reorda, L. Sterpone, M. Wirthlin
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2015
    DOI: 10.1109/TNS.2015.2498313
    KEYWORDS: radiation testing, benchmarks, microprocessors, fpgas, fault tolerance
    ABSTRACT: Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks

  39. Using Benchmarks for Radiation Testing of Microprocessors and FPGAs
    H. Quinn, W. Robinson, P. Rech, M. Aguirre, A. Barnard, M. Desogus, L. Entrena, M. Garcia-Valderas, S. Guertin, D. Kaeli, F. Kastensmidt, B. Kiddie, A. Sanchez-Clemente, M. Sonza Reorda, L. Sterpone, M. Wirthlin
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2015
    DOI: 10.1109/TNS.2015.2498313
    KEYWORDS: nuclear and high energy physics; nuclear energy and engineering; electrical and electronic engineering; software fault tolerance; soft errors; soft error rates; field-programmable gate arrays (fpgas)

  40. A Functional Approach for Testing the Reorder Buffer Memory
    S. Di Carlo, M. Gaudesi, E. Sanchez, M. Sonza Reorda
    JOURNAL OF ELECTRONIC TESTING, 2014
    DOI: 10.1007/s10836-014-5461-9
    KEYWORDS: digital system design test and verification; microprocessor testing; software-based self-test; embedded memory test; on-line test
    ABSTRACT: Superscalar processors may have the ability to execute instructions out-of-order to better exploit the internal hardware and to maximize the performance. To maintain in-order instruction commitment and to guarantee the correctness of the final results (as well as precise exception management), the Reorder Buffer (ROB) is used. From the architectural point of view, the ROB is a memory array of several thousands of bits that must be tested against hardware faults to ensure a correct behavior of the processor. Since it is deeply embedded within the microprocessor circuitry, the most straightforward approach to test the ROB is through Built-In Self-Test solutions, which are typically adopted by manufacturers for end-of-production test. However, these solutions may not always be used for the test during the operational phase (in-field test), which aims at detecting possible hardware faults arising when the electronic system works in its target environment. In fact, these solutions require the usage of test infrastructures that may not be accessible and/or documented, or simply not usable during the operational phase. This paper proposes an alternative solution, based on a functional approach, in which the test is performed by forcing the processor to execute a specially written test program and checking the behavior of the processor. This approach can be adopted for in-field test, e.g., at power-on or power-off, or during the time slots unused by the system application. The method has been validated by resorting to both an architectural and a memory fault simulator

  41. A New Hybrid Nonintrusive Error-Detection Technique Using Dual Control-Flow Monitoring
    L. Parra, A. Lindoso, M. Portela-Garcia, L. Entrena, B. Du, M. Sonza Reorda, L. Sterpone
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2014
    DOI: 10.1109/TNS.2014.2361953
    KEYWORDS: error detection; fault tolerance; control flow error detection; fault injection

  42. ASSESS: A Simulator of Soft Errors in the Configuration Memory of SRAM-based FPGAs
    C. Bernardeschi, L. Cassano, A. Domenici, L. Sterpone
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2014
    DOI: 10.1109/TCAD.2014.2329419
    KEYWORDS: simulations; fpgas; radiation; place and route
    ABSTRACT: In this paper a simulator of soft errors (SEUs) in the configuration memory of SRAM-based FPGAs is presented. The simulator, named ASSESS, adopts fault models for SEUs affecting the configuration bits controlling both logic and routing resources that have been demonstrated to be much more accurate than classical fault models adopted by academic and industrial fault simulators currently available. The simulator permits the propagation of faulty values to be traced in the circuit, thus allowing the analysis of the faulty circuit not only by observing its output, but also by studying fault activation and error propagation. ASSESS has been applied to several designs, including the miniMIPS microprocessor, chosen as a realistic test case to evaluate the capabilities of the simulator. The ASSESS simulations have been validated comparing their results with a fault injection campaign on circuits from the ITC'99 benchmark, resulting in an average error of only 0.1%

  43. Evaluating the radiation sensitivity of GPGPU caches: New algorithms and experimental results
    D. Sabena, M. Sonza Reorda, L. Sterpone, P. Rech, L. Carro
    MICROELECTRONICS RELIABILITY, 2014
    DOI: 10.1016/j.microrel.2014.05.001
    KEYWORDS: null frames; gpgpu; seu; cache; radiation testing
    ABSTRACT: Given their high computational power, General Purpose Graphics Processing Units (GPGPUs) are increasingly adopted: GPGPUs have begun to be preferred to CPUs for several computationally intensive applications, not necessarily related to computer graphics. However, their sensitivity to radiation still needs to be fully evaluated. In this context, GPGPU data caches and shared memory play a key role, since they increase performance by sharing data among the parallel resources of a GPGPU and by minimizing the memory access overhead. In this paper we present three new algorithms designed to support radiation experiments aimed at evaluating the radiation sensitivity of GPGPU data caches and shared memory. We also report the cross-section and Failure In Time results from neutron testing experiments performed on a commercial-off-the-shelf GPGPU using the proposed algorithms, with particular emphasis on the shared memory and on the L1 and L2 data caches

  44. Improving Colorwave with the probabilistic approach for reader-to-reader anti-collision TDMA protocols
    R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    WIRELESS NETWORKS, 2014
    DOI: 10.1007/s11276-013-0611-z
    KEYWORDS: rfid; reader-to-reader interference; probabilistic collision resolution; colorwave
    ABSTRACT: In RFID systems, wireless communication among readers and tags is subject to electromagnetic interference. In particular, when several readers work closely, forming a so-called Dense Reader Environment (DRE), reader-to-reader collisions may occur. Several anti-collision protocols have been proposed in the literature to address this issue. Distributed Color Selection (DCS) and Colorwave are two effective state-of-the-art protocols, based on Time Division Multiple Access (TDMA). DCS provides great fairness, but it does not adapt to changes in the network topology, penalizing the throughput of the network. Colorwave is an enhanced version of DCS offering more flexibility. Moreover, a general probabilistic approach has been suggested for solving collisions in TDMA protocols and, in particular, it has been applied to DCS. In this work, the probabilistic method is implemented in the collision resolution routine of Colorwave and its effects are analyzed, confirming the validity of this mechanism for TDMA protocols. As shown by simulation results, the probabilistic approach can be adopted to improve throughput or fairness, without adding any other requirement
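The probabilistic collision-resolution idea can be sketched with a toy simulator. This is an illustrative model, not an implementation of Colorwave or of the paper's protocol: on each collision a reader re-draws its color (time slot) only with probability p, which damps the cascades of color changes that a deterministic change-on-every-collision rule can trigger.

```python
import random

def simulate(num_colors, neighbors, p, rounds, seed=42):
    """Toy TDMA slot coloring: readers sharing a color with a neighbor collide;
    a collided reader picks a new random color with probability p.
    Returns the fraction of reader activations that were collision-free."""
    rng = random.Random(seed)
    readers = list(neighbors)
    color = {r: rng.randrange(num_colors) for r in readers}
    successes = 0
    for _ in range(rounds):
        collided = {r for r in readers
                    if any(color[n] == color[r] for n in neighbors[r])}
        successes += len(readers) - len(collided)
        for r in collided:
            if rng.random() < p:  # probabilistic resolution step
                color[r] = rng.randrange(num_colors)
    return successes / (rounds * len(readers))

# 4 mutually interfering readers, 4 colors: once a collision-free coloring is
# found it is kept, since collision-free readers never change color.
clique = {r: [n for n in range(4) if n != r] for r in range(4)}
throughput = simulate(4, clique, p=0.5, rounds=200)
```

Sweeping p in a model like this is one way to see the throughput/fairness trade-off the abstract refers to, although the real protocols also adjust the number of colors dynamically.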

  45. In- and out-degree distributions of nodes and coverage in random sector graphs
    R. Ferrero, M. Bueno-Delgado, F. Gandino
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2014
    DOI: 10.1109/TWC.2014.031314.130905
    KEYWORDS: wireless sensor network; directional antenna; optical sensor network; connectivity; topology
    ABSTRACT: In a random sector graph, the presence of an edge between two nodes depends on their distance and spatial orientation. This kind of graph is widely used for modeling wireless sensor networks where communication among nodes is directional. In particular, it is applied to describe both the radio frequency transmission among nodes equipped with directional antennas and the line-of-sight transmission in optical sensor networks. Important properties of a wireless sensor network, such as connectivity and coverage, can be investigated by studying the degree of the nodes of the corresponding random sector graph. In detail, the in-degree value represents the number of incoming edges, whereas the out-degree considers the outgoing edges. This paper mathematically characterizes the average degree of a random sector graph and the probability distributions of the in-degree and out-degree of the nodes. Furthermore, it derives the coverage probability of the network. All the formulas are validated through extensive simulations, showing an excellent match between theoretical results and experimental data
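The average-degree characterization can be checked with a quick Monte Carlo experiment. The sketch below is illustrative and not the paper's derivation: nodes are placed uniformly on the unit torus (to avoid border effects), each with a uniform orientation, and an edge u→v exists when v lies inside u's transmission sector of angular width alpha and radius rho; the expected average out-degree is then (n-1)·alpha·rho²/2, i.e. (n-1) times the sector area over the torus area.

```python
import math, random

def avg_out_degree(n, rho, alpha, seed=1):
    """Monte Carlo average out-degree of a random sector graph on the unit torus."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random(), rng.uniform(0.0, 2.0 * math.pi))
             for _ in range(n)]
    edges = 0
    for i, (x1, y1, theta) in enumerate(nodes):
        for j, (x2, y2, _) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = x2 - x1, y2 - y1
            dx -= round(dx)                    # wrap displacement on the torus
            dy -= round(dy)
            if math.hypot(dx, dy) > rho:
                continue
            diff = (math.atan2(dy, dx) - theta + math.pi) % (2.0 * math.pi) - math.pi
            if abs(diff) <= alpha / 2.0:       # v falls inside u's sector
                edges += 1
    return edges / n

estimate = avg_out_degree(400, rho=0.1, alpha=math.pi / 3)
theory = 399 * (math.pi / 3) * 0.1**2 / 2      # (n-1) * alpha * rho^2 / 2
```

Note that the average in-degree necessarily equals the average out-degree (both are the edge count divided by n); the two distributions differ, which is what the paper characterizes.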

  46. Increasing the Fault Coverage of Processor Devices during the Operational Phase Functional Test
    M. De Carvalho, P. Bernardi, E. Sanchez, M. Sonza Reorda, O. Ballan
    JOURNAL OF ELECTRONIC TESTING, 2014
    DOI: 10.1007/s10836-014-5457-5

  47. Key Management for Static Wireless Sensor Networks With Node Adding
    F. Gandino, B. Montrucchio, M. Rebaudengo
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2014
    DOI: 10.1109/TII.2013.2288063
    KEYWORDS: random key distribution; transitory master key; wireless sensor network (wsn); key management
    ABSTRACT: Wireless sensor networks offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to reach secure communications among nodes, many approaches employ symmetric encryption. Several key management schemes have been proposed in order to establish symmetric keys. This paper presents an innovative key management scheme called Random Seed Distribution with Transitory Master Key (RSDTMK), which adopts the random distribution of secret material and a transitory master key used to generate pairwise keys. The proposed approach addresses the main drawbacks of previous approaches based on these techniques. Moreover, it outperforms the state-of-the-art protocols by consistently providing a high security level
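The transitory-master-key idea, deriving pairwise keys during a bootstrap phase and then erasing the master secret, can be sketched as follows. This is an illustrative construction using HMAC-SHA256, not the exact RSDTMK derivation; the key and ID values are hypothetical.

```python
import hmac, hashlib

def pairwise_key(master_key: bytes, id_a: bytes, id_b: bytes) -> bytes:
    """Derive a symmetric pairwise key from the transitory master key and the
    two node IDs; sorting the IDs makes both nodes derive the same key."""
    lo, hi = sorted((id_a, id_b))
    return hmac.new(master_key, lo + b"|" + hi, hashlib.sha256).digest()

k_ab = pairwise_key(b"transitory-master", b"node-01", b"node-02")
k_ba = pairwise_key(b"transitory-master", b"node-02", b"node-01")
# k_ab == k_ba; after the bootstrap window each node erases the master key,
# so a node tampered with later cannot derive the keys of other pairs.
```

The security of any scheme in this family rests on the master key surviving only briefly: the window between deployment and erasure is exactly the exposure that random seed distribution is meant to reduce.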

  48. MIHST: A Hardware Technique for Embedded Microprocessor Functional On-line Self-Test
    P. Bernardi, M. Lyl, E. Sanchez, M. Sonza Reorda
    IEEE TRANSACTIONS ON COMPUTERS, 2014
    DOI: 10.1109/TC.2013.165

  49. On the Automatic Generation of Optimized Software-Based Self-Test Programs for VLIW Processors
    D. Sabena, M. Sonza Reorda, L. Sterpone
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2014
    DOI: 10.1109/TVLSI.2013.2252636
    KEYWORDS: test program generation; vliw; software-based self-test
    ABSTRACT: Very Long Instruction Word (VLIW) processors are increasingly employed in a large range of embedded signal processing applications, mainly due to their ability to provide high performance with reduced clock rate and power consumption. At the same time, there is an increasing request for efficient and optimal test techniques able to detect permanent faults in VLIW processors. Software-Based Self-Test (SBST) methods are a consolidated and effective solution to detect faults in a processor both at the end of the production phase and during the operational life; however, when traditional SBST techniques are applied to VLIW processors, they may prove ineffective (especially in terms of size and duration), due to their inability to exploit the parallelism intrinsic in these architectures. In this paper we present a new method for the automatic generation of efficient test programs specifically oriented to VLIW processors. The method starts from existing test programs based on generic SBST algorithms and automatically generates effective test programs able to reach the same fault coverage, while minimizing the test duration and the test code size. The method consists of four parametric phases and can deal with different VLIW processor models. The main goal of the paper is to show that in the case of VLIW processors it is possible to automatically generate an effective test program able to achieve high fault coverage with minimal test time and required resources. Experimental data gathered on a case study demonstrate the effectiveness of the proposed approach: results show that this method is able to exploit the intrinsic parallelism of the VLIW processor, taming the growth in size and duration of the test program as the processor size grows

  50. Performance analysis of reliable flooding in duty-cycle wireless sensor networks
    L. Zhang, R. Ferrero, E. Sanchez, M. Rebaudengo
    TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIES, 2014
    DOI: 10.1002/ett.2556
    ABSTRACT: The wireless sensor network (WSN) is an emerging technology widely applied in modern applications. Resource limitations and the peculiarity of broadcast communication cause traditional flooding methods to suffer severe performance degradation when directly applied to duty-cycle WSNs, in which each node auto-activates for a brief interval and stays dormant most of the time. In this work, a theoretical performance analysis of acknowledgement (ACK)-based and non-acknowledgement (NoACK)-based transmission mechanisms is presented. The evaluation considers both a point-to-point model and a point-to-multipoint one. Furthermore, the opportunistic flooding algorithm, which considers the effects of both the duty cycle and the unreliable wireless links of a WSN, is implemented and evaluated considering both the ACK-based and NoACK-based transmission mechanisms. A solid framework is proposed in order to optimise the flooding in duty-cycle WSNs according to the network requirements. Extensive simulations show that ACK-based and NoACK-based implementations produce a similar performance in terms of flooding delay, but with significantly different costs in energy consumption

  51. Recovery Time and Fault Tolerance Improvement for Circuits mapped on SRAM-based FPGAs
    S. Ullah Anees
    JOURNAL OF ELECTRONIC TESTING, 2014
    DOI: 10.1007/s10836-014-5463-7
    KEYWORDS: triple modular redundancy (tmr); partial and dynamic reconfiguration; single event upset
    ABSTRACT: The rapid adoption of FPGA-based systems in space and avionics demands dependability rules from the design to the layout phases to protect against radiation effects. Triple Modular Redundancy (TMR) is a widely used fault tolerance methodology to protect circuits implemented on SRAM-based FPGAs against radiation-induced Single Event Upsets. The accumulation of SEUs in the configuration memory can cause the TMR replicas to fail, requiring a periodic write-back of the configuration bit-stream. The associated system downtime due to scrubbing and the probability of simultaneous failures of two TMR domains increase with growing device densities. We propose a methodology to reduce the recovery time of TMR circuits with increased resilience to Cross-Domain Errors. Our methodology consists of an automated tool-flow for fine-grain error detection, error flag convergence and non-overlapping domain placement. The fine-grain error detection logic identifies the faulty domain using gate-level functions, while the error flag convergence logic reduces the overwhelming number of flag signals. The non-overlapping placement enables selective domain reconfiguration and greatly reduces the number of Cross-Domain Errors. Our results demonstrate an evident reduction of the recovery time due to fast error detection and selective partial reconfiguration of faulty domains. Moreover, the methodology drastically reduces Cross-Domain Errors in Look-Up Tables and routing resources. The improvements in recovery time and fault tolerance are achieved at an area overhead of a single LUT per majority voter in TMR circuits
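The TMR voting and faulty-domain identification described above can be illustrated with a minimal bitwise sketch. This is generic TMR logic, not the paper's tool flow; the domain labels "A"/"B"/"C" are hypothetical names for the three replicas.

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority voter: each output bit follows at least two inputs."""
    return (a & b) | (a & c) | (b & c)

def faulty_domain(a: int, b: int, c: int):
    """Error-flag logic: name the disagreeing replica, assuming at most one
    domain is faulty; None means all three replicas agree."""
    if a == b == c:
        return None
    if a == b:
        return "C"
    if a == c:
        return "B"
    return "A"

out = majority(0b1010, 0b1010, 0b0110)    # the single faulty replica is outvoted
flag = faulty_domain(0b1010, 0b1010, 0b0110)
```

Knowing which domain disagrees is what enables the selective recovery in the paper: only the flagged domain needs partial reconfiguration, instead of scrubbing the whole bitstream.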

  52. Reliability Evaluation of Embedded GPGPUs for Safety Critical Applications
    D. Sabena, L. Sterpone, L. Carro, P. Rech
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2014
    DOI: 10.1109/TNS.2014.2363358
    KEYWORDS: single-event effects; radiation testing; gpgpu
    ABSTRACT: Thanks to their capability of efficiently executing massive computations in parallel, General Purpose Graphic Processing Units (GPGPUs) have begun to be preferred to CPUs for several parallel applications in different domains. Recently, GPGPUs have begun to be employed in two particularly relevant fields: High Performance Computing (HPC) and embedded systems. The reliability requirements differ between these two application domains. In order to be employed in safety-critical applications, GPGPUs for embedded systems must be qualified as reliable. In this paper, we analyze through neutron irradiation typical parallel algorithms for embedded GPGPUs and we evaluate their reliability. We analyze how caches and thread distributions affect GPGPU reliability. The data have been acquired through neutron test experiments performed at the VESUVIO neutron facility at ISIS. The obtained experimental results show that, if the L1 cache of the considered GPGPU is disabled, the algorithm execution is more reliable. Moreover, it is demonstrated that during an FFT execution most errors appear in the stages in which the GPGPU is fully loaded, since the number of instantiated parallel tasks is higher

  53. Software-Based Hardening Strategies for Neutron Sensitive FFT Algorithms on GPUs
    L. Pilla, P. Rech, F. Silvestri, C. Frost, P. Navaux, M. Sonza, L. Carro
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2014
    DOI: 10.1109/TNS.2014.2301768
    ABSTRACT: In this paper we assess the neutron sensitivity of Graphics Processing Units (GPUs) when executing a Fast Fourier Transform (FFT) algorithm, and propose specific software-based hardening strategies to reduce its failure rate. Our research is motivated by experimental results with an unhardened FFT that demonstrate a majority of multiple errors in the output in the case of failures, which are caused by data dependencies. In addition, the use of the built-in error-correction code (ECC) showed a large overhead, and proved to be insufficient to provide high reliability. Experimental results with the hardened algorithm show a two orders of magnitude failure rate improvement over the original algorithm (one order of magnitude over ECC) and an overhead 64% smaller than ECC

  54. The impact of topology on energy consumption for collection tree protocols: An experimental assessment through evolutionary computation
    D. Bucur, G. Iacca, G. Squillero, A. Tonda
    APPLIED SOFT COMPUTING, 2014
    DOI: 10.1016/j.asoc.2013.12.002
    KEYWORDS: collection tree protocol; multihoplqi; wireless sensor networks; evolutionary algorithms; routing protocols; verification; energy consumption
    ABSTRACT: The analysis of worst-case behavior in wireless sensor networks is an extremely difficult task, due to the complex interactions that characterize the dynamics of these systems. In this paper, we present a new methodology for analyzing the performance of routing protocols used in such networks. The approach exploits a stochastic optimization technique, specifically an evolutionary algorithm, to generate a large, yet tractable, set of critical network topologies; such topologies are then used to infer general considerations on the behaviors under analysis. As a case study, we focused on the energy consumption of two well-known ad hoc routing protocols for sensor networks: the multi-hop link quality indicator and the collection tree protocol. The evolutionary algorithm started from a set of randomly generated topologies and iteratively enhanced them, maximizing a measure of "how interesting" such topologies are with respect to the analysis. In the second step, starting from the gathered evidence, we were able to define concrete, protocol-independent topological metrics which correlate well with protocols' poor performances. Finally, we discovered a causal relation between the presence of cycles in a disconnected network, and abnormal network traffic. Such creative processes were made possible by the availability of a set of meaningful topology examples. Both the proposed methodology and the specific results presented here - that is, the new topological metrics and the causal explanation - can be fruitfully reused in different contexts, even beyond wireless sensor networks
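The search loop described in this abstract — start from random topologies and iteratively enhance them toward "interesting" worst cases — can be sketched as a minimal (1+1) evolutionary loop. The hop-count cost below is a hypothetical stand-in for the paper's energy measure, and all parameters are illustrative:

```python
import random

def mutate(adj, n):
    # flip one random link in a symmetric adjacency map
    a, b = random.sample(range(n), 2)
    adj = {k: set(v) for k, v in adj.items()}
    if b in adj[a]:
        adj[a].discard(b); adj[b].discard(a)
    else:
        adj[a].add(b); adj[b].add(a)
    return adj

def cost(adj, n, sink=0):
    # BFS hop distances to the sink; disconnected topologies are rejected (-1)
    dist, frontier = {sink: 0}, [sink]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return sum(dist.values()) if len(dist) == n else -1

def evolve(n=8, gens=300, seed=1):
    # (1+1) loop: keep a mutated topology whenever it stays connected
    # and is at least as costly as the current best
    random.seed(seed)
    best = {v: set(range(n)) - {v} for v in range(n)}   # complete graph
    best_cost = cost(best, n)
    for _ in range(gens):
        cand = mutate(best, n)
        c = cost(cand, n)
        if c >= best_cost:
            best, best_cost = cand, c
    return best_cost
```

Starting from a complete graph (total hop count n-1), the loop drifts toward sparse, path-like topologies that maximize the routing cost, mirroring how the paper's algorithm surfaces pathological inputs for protocol analysis.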

  55. A Geometric Distribution Reader Anti-collision protocol for RFID Dense Reader Environments
    M. Bueno-Delgado, R. Ferrero, F. Gandino, P. Pavon-Marino, M. Rebaudengo
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2013
    DOI: 10.1109/TASE.2012.2218101
    KEYWORDS: rfid; reader collision problems; sift geometric distribution function; epcglobal; etsi en 302 208
    ABSTRACT: Dense passive radio frequency identification (RFID) systems are particularly susceptible to reader collision problems, categorized into reader-to-tag and reader-to-reader collisions. Both may degrade the system performance, decreasing the number of identified tags per time unit. Although many proposals have been suggested to avoid or handle these collisions, most of them are not compatible with current standards and regulations, require extra hardware, and do not make an efficient use of the network resources. This paper proposes the Geometric Distribution Reader Anti-collision (GDRA) protocol, a new centralized scheduler that exploits the Sift geometric probability distribution function to minimize reader collision problems. GDRA provides higher throughput than the state-of-the-art proposals for dense reader environments and, unlike the majority of previous works, is compliant with the EPCglobal standard and the ETSI EN 302 208 regulation, and can be implemented in real RFID systems without extra hardware
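The Sift geometric distribution at the core of GDRA can be illustrated in a few lines; `K` (number of contention slots) and `M` (maximum number of contenders) are illustrative parameters, not values from the paper:

```python
import random

def sift_probabilities(K, M):
    """Truncated, increasing geometric distribution over K slots.
    Later slots are far more likely, so with high probability only a
    few of up to M contending readers pick the earliest occupied slot."""
    alpha = M ** (-1.0 / (K - 1))
    norm = (1 - alpha) * alpha ** K / (1 - alpha ** K)
    return [norm * alpha ** -r for r in range(1, K + 1)]

def pick_slot(probs):
    # sample a slot index (1-based) from the distribution
    x, acc = random.random(), 0.0
    for slot, p in enumerate(probs, start=1):
        acc += p
        if x < acc:
            return slot
    return len(probs)
```

Contention is then resolved in favor of the earliest occupied slot; the skew of the distribution keeps the expected number of simultaneous winners close to one, which is what a centralized scheduler like GDRA exploits.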

  56. A Novel Fault Tolerant and Run-Time Reconfigurable Platform for Satellite Payload Processing
    L. Sterpone, M. Porrmann, J. Hagemeyer
    IEEE TRANSACTIONS ON COMPUTERS, 2013
    DOI: 10.1109/TC.2013.80

  57. DCNS: an Adaptable High Throughput RFID Reader-to-Reader Anti-collision Protocol
    F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2013
    DOI: 10.1109/TPDS.2012.208
    KEYWORDS: tdma; reader-to-reader collision; rfid
    ABSTRACT: The reader-to-reader collision problem represents a research topic of great recent interest for the radio frequency identification (RFID) technology. Among the state-of-the-art anticollision protocols, the ones that provide high throughput often have special requirements, such as extra hardware. This study investigates new high throughput solutions for static RFID networks without additional requirements. In this paper, two contributions are presented: a new configuration, called Killer, and a new protocol, called distributed color noncooperative selection (DCNS). The proposed configuration generates selfish behavior, thereby increasing channel utilization and throughput. DCNS fully exploits the Killer configuration and provides new features, such as dynamic priority management, which modifies the performance of the RFID readers on request. Simulations have been conducted in order to analyze the effects of the proposed innovations. The proposed approach is especially suitable for low-cost applications with a priority not uniformly distributed among readers. The experimental analysis has shown that DCNS provides a greater throughput than the state-of-the-art protocols, even those with additional requirements (e.g., 16 percent better than NFRA)

  58. Evaluation of Single and Additive Interference Models for RFID Collisions
    L. Zhang, F. Gandino, R. Ferrero, M. Rebaudengo
    MATHEMATICAL AND COMPUTER MODELLING, 2013
    DOI: 10.1016/j.mcm.2013.01.011
    KEYWORDS: rfid; interference models; reader-to-reader collision
    ABSTRACT: RFID readers for passive tags suffer from reader-to-reader interference. Mathematical models of reader-to-reader interference can be categorized into single interference and additive interference models. Although it considers only the direct collisions between two readers, the single interference model is commonly adopted since it allows faster simulations. However, the additive interference model is more realistic, since it captures the total interference from several readers. In this paper, an analysis of the two models is presented and a comparison between them is conducted according to several evaluation scenarios. In addition, the impact of the different parameters, including the path loss exponent, the SIR/SINR threshold and the noise power, is evaluated for both models
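The gap between the two models can be made concrete with a toy computation (the path-loss exponent, thresholds and distances below are hypothetical): the single model compares the signal against the strongest interferer only, while the additive model compares it against the sum of all interferers plus noise:

```python
def received_power(p_tx, dist, gamma=3.0):
    # simple log-distance path-loss model with exponent gamma
    return p_tx / dist ** gamma

def single_model_collision(signal, interferers, sir_thr):
    # single interference model: only the strongest interferer counts
    return signal / max(interferers) < sir_thr

def additive_model_collision(signal, interferers, noise, sinr_thr):
    # additive interference model: all interferers and noise add up
    return signal / (sum(interferers) + noise) < sinr_thr

# ten weak interferers: each is invisible to the single model,
# but their sum drives the SINR below the threshold
signal = received_power(1.0, 1.0)          # reference tag backscatter
weak = [received_power(1.0, 2.71)] * 10    # ~0.05 each
```

With a threshold of 10, the single model reports no collision (SIR ≈ 20) while the additive model does (SINR ≈ 2), which is exactly the kind of discrepancy the paper quantifies.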

  59. Fast Power Evaluation for Effective Generation of Test Programs Maximizing Peak Power Consumption
    P. Bernardi, M. De Carvalho, E. Sanchez, M. Sonza Reorda, A. Bosio, L. Dilillo, M. Valka, P. Girard
    JOURNAL OF LOW POWER ELECTRONICS, 2013
    DOI: 10.1166/jolpe.2013.1259

  60. IEEE Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems Guest Editorial
    D. Prashant, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2013
    DOI: 10.1007/s10836-013-5390-z

  61. Power Consumption Versus Configuration SEUs in Xilinx Virtex-5 FPGAs
    A. Aloisio, V. Bocci, R. Giordano, V. Izzo, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2013
    DOI: 10.1109/TNS.2013.2273001

  62. SEL-UP: A CAD tool for the sensitivity analysis of radiation-induced Single Event Latch-Up
    L. Sterpone
    MICROELECTRONICS RELIABILITY, 2013
    DOI: 10.1016/j.microrel.2013.07.104
    ABSTRACT: Space missions require extremely reliable components that must guarantee correct functionality without incurring catastrophic effects. When electronic devices are adopted in space applications, radiation-hardened technology must be adopted. In this paper we propose a novel method for analyzing the sensitivity of radiation-hardened technology to Single Event Latch-up (SEL). Experimental results, compared with a heavy-ion beam campaign, demonstrated the feasibility of the proposed solution

  63. Trade-off between maximum cardinality of collision sets and accuracy of RFID reader-to-reader collision detection
    L. Zhang, F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    EURASIP JOURNAL ON EMBEDDED SYSTEMS, 2013
    DOI: 10.1186/1687-3963-2013-10
    ABSTRACT: As the adoption of the radio-frequency identification (RFID) technology is increasing, many applications require a dense reader deployment. In such environments, reader-to-reader interference becomes a critical problem, so the proposal of effective anti-collision algorithms and their analysis are particularly important. Existing reader-to-reader anti-collision algorithms are typically analyzed using single interference models that consider only direct collisions. The additive interference models, which consider the sum of interferences, are more accurate but require more computational effort. The goal of this paper is to find the difference in accuracy between single and additive interference models and how many interference components should be considered in additive models. An in-depth analysis evaluates to which extent the number of the additive components in a possible collision affects the accuracy of collision detection. The results of the investigation show that an analysis limited to direct collisions cannot reach a satisfactory accuracy, but the collisions generated by the addition of the interferences from a large number of readers do not significantly affect the detection of RFID reader-to-reader collisions

  64. A benchmark for cooperative coevolution
    A. Tonda, E. Lutton, G. Squillero
    MEMETIC COMPUTING, 2012
    DOI: 10.1007/s12293-012-0095-x
    ABSTRACT: Cooperative co-evolution algorithms (CCEA) are a thriving sub-field of evolutionary computation. This class of algorithms makes it possible to exploit the artificial Darwinist scheme more efficiently, as soon as an optimisation problem can be turned into a co-evolution of interdependent sub-parts of the searched solution. Testing the efficiency of new CCEA concepts, however, is not straightforward: while there is a rich literature of benchmarks for more traditional evolutionary techniques, the same does not hold true for this relatively new paradigm. We present a benchmark problem designed to study the behavior and performance of CCEAs, modeling a search for the optimal placement of a set of lamps inside a room. The relative complexity of the problem can be adjusted by operating on a single parameter. The fitness function is a trade-off between conflicting objectives, so the performance of an algorithm can be examined by making use of different metrics. We show how three different cooperative strategies, Parisian Evolution, Group Evolution and Allopatric Group Evolution, can be applied to the problem. Using a Classical Evolution approach as comparison, we analyse the behavior of each algorithm in detail, with respect to the size of the problem
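A minimal version of such a lamp-placement fitness — room coverage traded off against the number of lamps used — could look like the sketch below; the room size, lamp radius, sampling grid and penalty weight are illustrative, not the benchmark's actual values:

```python
def lamp_fitness(lamps, room=(100.0, 100.0), radius=15.0, grid=20, w=0.02):
    """Fraction of sampled room points lit by at least one lamp,
    penalised by the number of lamps used (conflicting objectives)."""
    covered = 0
    for i in range(grid):
        for j in range(grid):
            x = (i + 0.5) * room[0] / grid
            y = (j + 0.5) * room[1] / grid
            if any((x - lx) ** 2 + (y - ly) ** 2 <= radius ** 2
                   for lx, ly in lamps):
                covered += 1
    return covered / grid ** 2 - w * len(lamps)
```

Because coverage and lamp count pull in opposite directions, candidate solutions — whole lamp sets in Group Evolution, single lamps in Parisian Evolution — can be compared on the same scalar while still exposing the trade-off the benchmark is designed to probe.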

  65. A fair and high throughput reader-to-reader anticollision protocol in dense RFID networks
    R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2012
    DOI: 10.1109/TII.2011.2176742
    KEYWORDS: rfid; reader-to-reader collision; fairness
    ABSTRACT: The supply chain is a typical scenario for exploiting Radio Frequency Identification (RFID) technology. Its growing use in all the supply chain areas makes the presence of many close RFID readers more common. In such an environment, interferences among readers are critical. Many protocols have been proposed to reduce reader-to-reader collisions. Experimental data showed that the Neighbor Friendly Reader Anticollision (NFRA) protocol [1] maximizes the network throughput. However, it does not take into account the delay between the request and the granting of tag queries, causing delays for some readers. This paper proposes two approaches to increase the fairness and ensure a high throughput for each reader. A theoretical analysis, supported by experimental simulations, demonstrates the improvements achieved

  66. An adaptive low-cost tester architecture supporting embedded memory volume diagnosis
    P. Bernardi, L. Ciganda
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2012
    DOI: 10.1109/TIM.2011.2179822
    KEYWORDS: access protocols; adaptive algorithm; automatic test equipment; built-in self-test (bist); fault diagnosis; semiconductor device testing; system on a chip (soc); test data compression
    ABSTRACT: This paper describes the working principle and an implementation of a low-cost tester architecture supporting volume test and diagnosis of built-in self-test (BIST)-assisted embedded memory cores. The described tester architecture autonomously executes a diagnosis-oriented test program, adapting the stimuli at run-time, based on the collected test results. In order to effectively allow the tester architecture to interact with the devices under test with an acceptable time overhead, the approach exploits a special hardware module to manage the diagnostic process. Embedded static RAMs equipped with diagnostic BISTs and IEEE 1500 wrappers were selected as case study; experimental results show the feasibility of the approach when having a field-programmable gate array available on the tester and its effectiveness in terms of diagnosis time and required tester memory with respect to traditional testers executing diagnosis procedures by means of software running on the host computer

  67. An algorithmic and architectural study on Montgomery exponentiation in RNS
    F. Gandino, F. Lamberti, G. Paravati, J. Bajard, P. Montuschi
    IEEE TRANSACTIONS ON COMPUTERS, 2012
    DOI: 10.1109/TC.2012.84
    ABSTRACT: The modular exponentiation on large numbers is computationally intensive. An effective way for performing this operation consists in using Montgomery exponentiation in the Residue Number System (RNS). This paper presents an algorithmic and architectural study of such exponentiation approach. From the algorithmic point of view, new and state-of-the-art opportunities that come from the reorganization of operations and precomputations are considered. From the architectural perspective, the design opportunities offered by well-known computer arithmetic techniques are studied, with the aim of developing an efficient arithmetic cell architecture. Furthermore, since the use of efficient RNS bases with a low Hamming weight is being considered with ever more interest, four additional cell architectures specifically tailored to these bases are developed and the tradeoff between benefits and drawbacks is carefully explored. An overall comparison among all the considered algorithmic approaches and cell architectures is presented, with the aim of providing the reader with an extensive overview of the Montgomery exponentiation opportunities in RNS
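The paper studies the RNS variant; the overall shape of Montgomery exponentiation is easier to show in the plain radix-2^k setting. The sketch below (not the paper's RNS algorithm) keeps every intermediate product in Montgomery form so that reductions never need a trial division:

```python
def montgomery_setup(n, r_bits):
    # precompute n' with n * n' == -1 (mod R), R = 2**r_bits (n must be odd)
    r = 1 << r_bits
    n_inv = pow(-n, -1, r)
    return r, n_inv

def redc(t, n, r_bits, n_inv):
    # Montgomery reduction: returns t * R^-1 mod n, valid for t < n * R
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_inv) & r_mask
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

def mont_exp(base, exp, n, r_bits=64):
    # left-to-right square-and-multiply, operands kept in Montgomery form
    r, n_inv = montgomery_setup(n, r_bits)
    base_m = base * r % n
    acc = r % n                      # Montgomery form of 1
    for bit in bin(exp)[2:]:
        acc = redc(acc * acc, n, r_bits, n_inv)
        if bit == '1':
            acc = redc(acc * base_m, n, r_bits, n_inv)
    return redc(acc, n, r_bits, n_inv)
```

The RNS versions studied in the paper replace the shift-and-mask reduction with base-extension steps across two residue bases, but the exponentiation loop above is structurally the same.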

  68. On the use of embedded debug features for permanent and transient fault resilience in microprocessors
    M. Portela-Garcia, M. Grosso, M. Gallardo-Campos, M. Sonza Reorda, L. Entrena, M. Garcia-Valderas, C. Lopez-Ongil
    MICROPROCESSORS AND MICROSYSTEMS, 2012
    DOI: 10.1016/j.micpro.2012.02.013
    KEYWORDS: microprocessor; on-line test; debug infrastructure; error detection
    ABSTRACT: Microprocessor-based systems are employed in an increasing number of applications where dependability is a major constraint. For this reason detecting faults arising during normal operation while introducing the least possible penalties is a main concern. Different forms of redundancy have been employed to ensure error-free behavior, while error detection mechanisms can be employed where some detection latency is tolerated. However, the high complexity and the low observability of microprocessors' internal resources make the identification of adequate on-line error detection strategies a very challenging task, which can be tackled at circuit or system level. Concerning system-level strategies, a common limitation is in the mechanism used to monitor program execution and then detect errors as soon as possible, so as to reduce their impact on the application. In this work, an on-line error detection approach based on the reuse of available debugging infrastructures is proposed. The approach can be applied to different system architectures profiting from the debug trace port available in most of current microprocessors to observe possible misbehaviors. Two microprocessors have been used to study the applicability of the solution, LEON3 and ARM7TDMI. Results show that the presented fault detection technique enhances observability and thus error detection abilities in microprocessor-based systems without requiring modifications on the core architecture

  69. Software-Based Testing for System Peripherals
    M. Grosso, W. Perez Holguin, E. Sanchez, M. Sonza Reorda, A. Tonda, J. Velasco Medina
    JOURNAL OF ELECTRONIC TESTING, 2012
    DOI: 10.1007/s10836-012-5287-2
    KEYWORDS: system peripheral; functional testing; dma controller; interrupt controller; sbst
    ABSTRACT: Software-based self-testing strategies have been mainly proposed to tackle microprocessor testing, but may also be applied to peripheral testing. However, testing system peripherals (e.g., DMA controllers, interrupt controllers, and internal counters) is a challenging task, since their observability and controllability are even more reduced when compared to microprocessors and to peripherals devoted to I/O communication (e.g., serial or parallel ports). In this paper an approach to develop functional tests for system peripherals is proposed. The presented methodology requires two correlated phases: module configuration and module operation. The first one prepares the peripheral to work in the different operation modes, whereas the second one is in charge of exciting the whole device and observing its behavior. We propose a methodology that guides the test engineer in building a compact set of test programs able to reach high structural fault coverage levels in a short time. Experimental results demonstrating the method effectiveness for two real-world case studies are finally reported

  70. A Low-Cost Solution for Deploying Processor Cores in Harsh Environments
    M. Sonza Reorda, M. Violante, C. Meinhardt, R. Reis
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2011
    DOI: 10.1109/TIE.2011.2134054

  71. A Parallel Tester Architecture for Accelerometer and Gyroscope MEMS Calibration and Test
    L. Ciganda, P. Bernardi, M. Sonza Reorda, D. Barbieri, L. Bonaria, R. Losco, L. Marcigot, M. Straiotto
    JOURNAL OF ELECTRONIC TESTING, 2011
    DOI: 10.1007/s10836-011-5210-2
    KEYWORDS: mems testing-calibration; accelerometer; gyroscope; automatic test system; ats; adaptive fpga-based ate architecture
    ABSTRACT: This paper describes a tester architecture for Accelerometer and Gyroscope Micro-ElectroMechanical System (MEMS) device test and calibration, allowing an increased parallelism rate and process accuracy. The proposed tester architecture tackles some critical issues related to MEMS testing, such as mitigating mechanical concerns that potentially impact the equipment Mean Time Between Maintenance and guaranteeing a sufficient number of measurements in the time unit. The proposed strategy consists of an innovative and low-cost tester resource partitioning that overcomes current limitations of multisite Accelerometer and Gyroscope MEMS testing. A tester prototype was implemented exploiting FPGAs; the feasibility and effectiveness of the proposed methodology were demonstrated on commercial accelerometer and gyroscope MEMS devices

  72. An Analytical Model of the Propagation Induced Pulse Broadening (PIPB) Effects on Single Event Transient in Flash-based FPGAs
    L. Sterpone, N. Battezzati, F. Lima Kastensmidt, R. Chipana
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2011
    DOI: 10.1109/TNS.2011.2161886
    KEYWORDS: fpga; single event transient (set); fault injection; processor; static analysis

  73. Application-oriented SEU cross-section of a processor soft core for Atmel RHBD FPGAs
    N. Battezzati, F. Margaglia, M. Violante, F. Decuzzi, D. Merodio Codinachs, B. Bancelin
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2011
    DOI: 10.1109/TNS.2010.2103326

  74. Artificial evolution in computer aided design: from the optimization of parameters to the creation of assembly programs
    G. Squillero
    COMPUTING, 2011
    DOI: 10.1007/s00607-011-0157-9
    KEYWORDS: evolutionary computation; microprocessors; post-silicon verification; speed paths
    ABSTRACT: Evolutionary computation has seen limited but steady use in the CAD community during the past 20 years. Nowadays, due to their overwhelming complexity, significant steps in the validation of microprocessors must be performed on silicon, i.e., running experiments on physical devices after tape-out. This scenario has created new space for innovative heuristics. This paper shows a methodology based on an evolutionary algorithm that can be used to devise assembly programs suitable for a range of on-silicon activities. The paper describes how to take into account complex hardware characteristics and architectural details. The experimental evaluation, performed on two high-end Intel microprocessors, demonstrates the potential of this line of research

  75. Coping With the Obsolescence of Safety - or Mission-Critical Embedded Systems Using FPGAs
    H. Guzman-Miranda, L. Sterpone, M. Violante, M. Aguirre, M. Gutierrez-Rizo
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2011
    DOI: 10.1109/TIE.2010.2050291

  76. Functional Verification of DMA Controllers
    M. Grosso, H. Perez, D. Ravotto, E. Sanchez, M. Sonza Reorda, A. Tonda, J. Velasco Medina
    JOURNAL OF ELECTRONIC TESTING, 2011
    DOI: 10.1007/s10836-011-5219-6
    KEYWORDS: functional verification; peripheral; simulation; dma controller

  77. Increasing pattern recognition accuracy for chemical sensing by evolutionary based drift compensation
    S. Di Carlo, M. Falasconi, E. Sanchez, A. Scionti, G. Squillero, A. Tonda
    PATTERN RECOGNITION LETTERS, 2011
    DOI: 10.1016/j.patrec.2011.05.019
    KEYWORDS: electronic nose; chemical sensors; bioinformatics; classification systems; evolutionary strategy; sensor drift
    ABSTRACT: Artificial olfaction systems, which mimic human olfaction by using arrays of gas chemical sensors combined with pattern recognition methods, represent a potentially low-cost tool in many areas of industry such as perfumery, food and drink production, clinical diagnosis, health and safety, environmental monitoring and process control. However, successful applications of these systems are still largely limited to specialized laboratories. Sensor drift, i.e., the lack of a sensor's stability over time, still limits their deployment in real industrial setups. This paper presents and discusses an evolutionary based adaptive drift-correction method designed to work with state-of-the-art classification systems. The proposed approach exploits a cutting-edge evolutionary strategy to iteratively tweak the coefficients of a linear transformation which can transparently correct raw sensor measurements, thus mitigating the negative effects of the drift. The method learns the optimal correction strategy without the use of models or other hypotheses on the behavior of the physical chemical sensors
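The core idea — evolving the coefficients of a linear correction applied to raw sensor readings — can be sketched with a toy (1+1) evolution strategy. The drifted/reference data, mutation step size and generation count below are illustrative, not the paper's experimental setup:

```python
import random

def evolve_correction(samples, targets, gens=200, sigma=0.05, seed=0):
    """(1+1) evolution strategy over the (gain, offset) coefficients of a
    per-feature linear correction x' = gain * x + offset, minimising the
    squared error between corrected readings and drift-free references."""
    random.seed(seed)
    dim = len(samples[0])
    parent = [1.0] * dim + [0.0] * dim      # start from identity correction

    def err(g):
        gain, off = g[:dim], g[dim:]
        return sum(sum((gain[k] * s[k] + off[k] - t[k]) ** 2
                       for k in range(dim))
                   for s, t in zip(samples, targets))

    best = err(parent)
    for _ in range(gens):
        child = [v + random.gauss(0, sigma) for v in parent]
        e = err(child)
        if e <= best:                       # keep the better correction
            parent, best = child, e
    return parent, best
```

In the paper the error signal comes from a state-of-the-art classifier rather than labelled references, but the transparent linear pre-processing stage is the same: the classifier never needs to be retrained as the drift evolves.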

  78. Layout-Aware Multi-Cell Upsets Effects Analysis on TMR Circuits Implemented on SRAM-Based FPGAs
    L. Sterpone, M. Violante, A. Panariti, A. Bocquillo, F. Miller, N. Buard, A. Manuzzato, S. Gerardin, A. Paccagnella
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2011
    DOI: 10.1109/TNS.2011.2161887

  79. Probabilistic DCS: An RFID reader-to-reader anti-collision protocol
    F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2011
    DOI: 10.1016/j.jnca.2010.04.007
    KEYWORDS: rfid; reader-to-reader collision
    ABSTRACT: The wide adoption of radio frequency identification (RFID) for applications requiring a large number of tags and readers makes the reader-to-reader collision problem critical. Various anti-collision protocols have been proposed, but the majority require considerable additional resources and costs. Distributed color system (DCS) is a state-of-the-art protocol based on time division, without noteworthy additional requirements. This paper presents the probabilistic DCS (PDCS) reader-to-reader anti-collision protocol, which employs probabilistic collision resolution. Unlike previous time division protocols, PDCS allows multichannel transmissions, in accordance with international RFID regulations. A theoretical analysis is provided in order to clearly identify the behavior of the additional parameter representing the probability. The proposed protocol maintains the features of DCS while achieving more efficiency. Theoretical analysis demonstrates that the number of reader-to-reader collisions after a slot change is decreased by over 30%. The simulation analysis validates the theoretical results, and shows that PDCS reaches better performance than state-of-the-art reader-to-reader anti-collision protocols
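The probabilistic twist over DCS can be illustrated in a few lines: a collided reader re-draws its colour (time slot) only with probability p, instead of always as in a deterministic scheme. The colour count and topology below are hypothetical, chosen only to make the mechanism visible:

```python
import random

def pdcs_round(colors, neighbors, num_colors, p):
    """One round of probabilistic colour selection: find readers whose
    colour clashes with a neighbour's, then let each of them pick a new
    random colour only with probability p."""
    collided = {r for r, c in colors.items()
                if any(colors[n] == c for n in neighbors[r])}
    new = dict(colors)
    for r in collided:
        if random.random() < p:
            new[r] = random.randrange(num_colors)
    return new, len(collided)
```

With p = 1 every collided reader moves, which can simply shuffle the same collisions into other slots; a p below 1 lets part of a collided pair stay put, which is the intuition behind the 30% reduction claimed in the abstract.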

  80. A Framework for Automated Detection of Power-Related Software Errors in Industrial Verification Processes
    S. Gandini, W. Ruzzarin, E. Sanchez, G. Squillero, A. Tonda
    JOURNAL OF ELECTRONIC TESTING, 2010
    DOI: 10.1007/s10836-010-5184-5
    KEYWORDS: diagnostics; evolutionary algorithms; mobile phones; power consumption; software testing; testing tools
    ABSTRACT: The complexity of cell phones is continually increasing, with regard to both hardware and software parts. Like many complex devices, their components are usually designed and verified separately by specialized teams of engineers and programmers. However, even if each isolated part is working flawlessly, it often happens that bugs in one software application arise due to the interaction with other modules. Those software misbehaviors become particularly critical when they affect the residual battery life, causing power dissipation. An automatic approach to detect power-affecting software defects is proposed. The approach is intended to be part of a qualifying verification plan and to complement human expertise. Motorola, always at the forefront of researching innovations in the product development chain, experimented with the approach on a mobile phone prototype during a partnership with Politecnico di Torino. Software errors unrevealed by all human-designed tests have been detected by the proposed framework, two out of three critical from the power consumption point of view, thus enabling Motorola to further improve its verification plans. Details of the tests and experimental results are presented

  81. A Hybrid Approach for Detection and Correction of Transient Faults in SoCs
    P. Bernardi, M. Grosso, L. Bolzani Poehls, M. Sonza Reorda
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2010
    DOI: 10.1109/TDSC.2010.33
    ABSTRACT: Critical applications based on Systems-on-Chip (SoCs) require suitable techniques that are able to ensure a sufficient level of reliability. Several techniques have been proposed to improve fault detection and correction capabilities of faults affecting SoCs. This paper proposes a hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches. The proposed solution combines some software modifications, which are easy to automate, with the introduction of a hardware module, which is independent of the specific application. The method is particularly suitable to fit in a typical SoC design flow and is shown to achieve a better trade-off between the achieved results and the required costs than corresponding purely hardware or software techniques. In fact, the proposed approach offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead of a purely hardware-based one

  82. A new Timing Driven Placement Algorithm for Dependable Circuits on SRAM-based FPGAs
    L. Sterpone
    ACM TRANSACTIONS ON RECONFIGURABLE TECHNOLOGY AND SYSTEMS, 2010
    DOI: 10.1145/1857927.1857934
    KEYWORDS: hardware; arithmetic and logic structures; reliability; testing and fault tolerance

  83. Analysis of SET propagation in Flash-based FPGAs by means of electrical pulse injection
    L. Sterpone, N. Battezzati, V. Ferlet-Cavrois
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2010
    DOI: 10.1109/TNS.2010.2043686
    KEYWORDS: characterization; flash-based fpgas; single event transients (sets)
    ABSTRACT: Advanced digital circuits are increasingly sensitive to single event transient (SET) phenomena. Technology scaling has resulted in a greater sensitivity to single event effects (SEEs), and in particular to SET propagation, since transients may be generated and propagated through the circuit logic, leading to behavioral errors of the affected circuit. When circuits are implemented on Flash-based FPGAs, SETs generated in the combinational logic resources are the main source of critical behavior. In this paper, we developed a technique based on electrical pulse injection for the analysis of SET propagation within the logic resources of Flash-based FPGAs. We outline a logic schematic that allows the injection of different SET pulses. We performed several experimental analyses: we characterized the basic logic gates used by circuits implemented on Flash-based FPGAs, evaluating the effect on logic chains of realistic length, and we evaluated SET propagation through microprocessor logic paths. Results demonstrated the possibility of mitigating SET-broadening effects by acting on physical place and route constraints

  84. Boosting Software Fault Injection for Dependability Analysis of Real-Time Embedded Applications
    G. Cabodi, M. Murciano, M. Violante
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2010
    DOI: 10.1145/1880050.1880060
    ABSTRACT: The design of complex embedded systems deployed in safety-critical or mission-critical applications mandates the availability of methods to validate the system dependability across the whole design flow. In this article we introduce a fault injection approach, based on loadable kernel modules and running under the Linux operating system, which can be adopted as soon as a running prototype of the system is available. Moreover, for the purpose of decoupling dependability analysis from hardware availability, we also propose the adoption of hardware virtualization. Extensive experimental results show that statistical analyses made on top of virtual prototypes are in good agreement with the information disclosed by fault detection trends of real platforms, even under real-time constraints

  85. Design Validation of Multithreaded Processors using Threads Evolution
    D. Ravotto, E. Sanchez, M. Sonza Reorda, G. Squillero
    JICS. JOURNAL OF INTEGRATED CIRCUITS AND SYSTEMS, 2010

  86. Evaluating the Impact of DFM Library Optimizations on Alpha-induced SEU Sensitivity in a Microprocessor Core
    D. Appello, M. Grosso, D. Loparco, F. Melchiori, A. Paccagnella, P. Rech, M. Sonza Reorda
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2010
    DOI: 10.1109/TNS.2010.2049119
    ABSTRACT: This paper presents and discusses the results of alpha Single Event Upset (SEU) tests on an embedded 8051 microprocessor core implemented using three different standard cell libraries. Each library is based on a different Design for Manufacturability (DfM) optimization strategy; our goal is to understand how these strategies may affect the device sensitivity to alpha-induced Soft Errors. The three implementations are tested by resorting to advanced Design for Testability (DfT) methodologies, and radiation experiment results are compared. Electrical simulations of flip-flops are finally performed to propose physical explanations for the observed phenomena

  87. Exploiting an infrastructure-intellectual property for systems-on-chip test, diagnosis and silicon debug
    P. Bernardi, M. Grosso, M. Rebaudengo, M. Sonza Reorda
    IET COMPUTERS & DIGITAL TECHNIQUES, 2010
    DOI: 10.1049/iet-cdt.2008.0122

  88. Microprocessor Software-Based Self-Testing
    M. Psarakis, D. Gizopoulos, E. Sanchez, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 2010
    DOI: 10.1109/MDT.2010.5
    KEYWORDS: VLSI integrated circuit testing
    ABSTRACT: This article discusses the potential role of software-based self-testing in the microprocessor test and validation process, as well as its supplementary role in other classic functional- and structural-test methods. In addition, the article proposes a taxonomy for different SBST methodologies according to their test program development philosophy, and summarizes research approaches based on SBST techniques for optimizing other key aspects

  89. Microvesicles Derived from Adult Human Bone Marrow and Tissue Specific Mesenchymal Stem Cells Shuttle Selected Pattern of miRNAs
    F. Collino, M. Deregibus, S. Bruno, L. Sterpone, G. Aghemo, L. Viltono, C. Tetta, G. Camussi
    PLOS ONE, 2010
    DOI: 10.1371/journal.pone.0011803

  90. Tampering in RFID: A Survey on Risks and Defenses
    F. Gandino, B. Montrucchio, M. Rebaudengo
    MOBILE NETWORKS AND APPLICATIONS, 2010
    DOI: 10.1007/s11036-009-0209-y
    ABSTRACT: RFID is a well-known pervasive technology, which provides promising opportunities for the implementation of new services and for the improvement of traditional ones. However, pervasive environments require strong efforts on all aspects of information security. Notably, RFID passive tags are exposed to attacks, since strict limitations affect the security techniques for this technology. A critical threat for RFID-based information systems is represented by data tampering, which corresponds to the malicious alteration of data recorded in the tag memory. The aim of this paper is to describe the characteristics and the effects of data tampering in RFID-based information systems, and to survey the approaches proposed by the research community to protect against it. The most important recent studies on privacy and security for RFID-based systems are examined, and the protection given against tampering is evaluated. This paper provides readers with an exhaustive overview on risks and defenses against data tampering, highlighting RFID weak spots and open issues

  91. A Novel Dual Core Architecture for the Analysis of DNA Microarray Images
    L. Sterpone
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2009
    DOI: 10.1109/TIM.2009.2015695

  92. Effective Diagnostic Pattern Generation Strategy for Transition-Delay Faults in Full-Scan SOCs
    D. Appello, P. Bernardi, M. Grosso, E. Sanchez, M. Sonza Reorda
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2009
    DOI: 10.1109/TVLSI.2008.2006177
    KEYWORDS: sbst; large-scale circuits; delay effects
    ABSTRACT: Nanometric circuits and systems are increasingly susceptible to delay defects. This paper describes a strategy for the diagnosis of transition-delay faults in full-scan systems-on-a-chip (SOCs). The proposed methodology takes advantage of a suitably generated software-based self-test (SBST) test set and of the scan chains included in the final SOC design. The effectiveness and feasibility of the proposed approach were evaluated on a nanometric SOC test vehicle including an 8-bit microcontroller, some memory blocks, and an arithmetic core, manufactured by STMicroelectronics. Results show that the proposed technique can achieve high diagnostic resolution while maintaining a reasonable application time

  93. Methodologies to study frequency-dependent Single Event Effects sensitivity in Flash-based FPGAs
    N. Battezzati, S. Gerardin, A. Manuzzato, D. Merodio, A. Paccagnella, C. Poivey, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2009
    DOI: 10.1109/TNS.2009.2034316

  94. New techniques for improving the performance of the lockstep architecture for SEEs mitigation in FPGA embedded processors
    F. Abate, L. Sterpone, C. Lisboa, L. Carro, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2009
    DOI: 10.1109/TNS.2009.2013237

  95. On Improving Automation by Integrating RFID in the Traceability Management of the Agri-Food Sector
    F. Gandino, B. Montrucchio, M. Rebaudengo, E. Sanchez
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2009
    DOI: 10.1109/TIE.2009.2019569
    ABSTRACT: Traceability is a key factor for the agri-food sector. RFID technology, widely adopted for supply chain management, can be used effectively for traceability management. In this paper, a framework for the evaluation of a traceability system for the agri-food industry is presented, and the automation level of an RFID-based traceability system is analyzed and compared with that of traditional ones. Internal and external traceability are both considered and formalized, in order to classify different environments according to their automation level. Traceability systems used in a sample sector are experimentally analyzed, showing that by using RFID technology, agri-food enterprises increase their automation level and also their efficiency, in a sustainable way

  96. Opportunity and Constraints for Wide Adoption of RFID in Agri-Food
    F. Gandino, E. Sanchez, B. Montrucchio, M. Rebaudengo
    INTERNATIONAL JOURNAL OF ADVANCED PERVASIVE AND UBIQUITOUS COMPUTING, 2009

  97. Test Program Generation for Communication Peripherals in Processor-Based Systems-on-Chip
    A. Apostolakis, D. Gizopoulos, M. Psarakis, D. Ravotto, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 2009
    DOI: 10.1109/MDT.2009.43
    ABSTRACT: Testing communication peripherals in a system-on-a-chip environment is particularly challenging. The authors explore two test program generation approaches, one fully automated and one deterministically guided, and propose a novel combination of the two schemes that can be applied in a generic manner to a wide set of communication cores

  98. A New Mitigation Approach For Soft Errors In Embedded Processors
    F. Abate, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2008
    DOI: 10.1109/TNS.2008.2000839

  99. A new Algorithm for the Analysis of the MCUs Sensitiveness of TMR Architectures in SRAM-based FPGAs
    L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2008
    DOI: 10.1109/TNS.2008.2001858

  100. An Effective Technique for the Automatic Generation of Diagnosis-oriented Programs for Processor Cores
    P. Bernardi, E. Sanchez, M. Schillaci, G. Squillero, M. Sonza Reorda
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2008
    DOI: 10.1109/TCAD.2008.915541
    ABSTRACT: A large part of the microprocessor cores in use today are designed to be cheap and mass produced. The diagnostic process, which is fundamental to improve yield, has to be as cost effective as possible. This paper presents a novel approach to the construction of diagnosis-oriented software-based test sets for microprocessors. The methodology exploits existing manufacturing test sets designed for software-based self-test and improves them by using a new diagnosis-oriented approach. Experimental results are reported in this paper showing the feasibility, robustness, and effectiveness of the approach for diagnosing stuck-at faults on an Intel i8051 processor core

  101. Effectiveness of TMR-based techniques to mitigate alpha-induced SEU accumulation in commercial SRAM-based FPGAs
    A. Manuzzato, S. Gerardin, A. Paccagnella, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2008
    DOI: 10.1109/TNS.2008.2000850

  102. Hardware and Software Transparency in the Protection of Programs Against SEUs and SETs
    E.L. Rhod, Carlos Arthur Lang Lisbôa, L. Carro, M. Sonza Reorda, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2008

  103. Monte Carlo Analysis of the Effects of Soft Errors Accumulation in SRAM-based FPGAs
    N. Battezzati, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2008
    DOI: 10.1109/TNS.2008.2006839

  104. Soft Errors in SRAM-FPGAs: a Comparison of Two Complementary Approaches
    M. Alderighi, F. Casini, S. D'Angelo, M. Mancini, S. Pastore, L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2008
    DOI: 10.1109/TNS.2008.2000479

  105. Software and Hardware Techniques for SEU Detection in IP Processors
    C. Bolchini, A. Miele, M. Rebaudengo, F. Salice, L. Sterpone, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2008
    DOI: 10.1007/s10836-007-5028-0

  106. A New Partial Reconfiguration-based Fault-Injection System to Evaluate SEU Effects in SRAM-based FPGAs
    L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2007
    DOI: 10.1109/TNS.2007.904080

  107. A System-layer Infrastructure for SoC Diagnosis
    P. Bernardi, M. Grosso, M. Rebaudengo, M. Sonza Reorda
    JOURNAL OF ELECTRONIC TESTING, 2007

  108. A new approach to estimate the effect of single event transients in complex circuits
    M. Aguirre, V. Baena, J. Tombs, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2007
    DOI: 10.1109/TNS.2007.895549

  109. A new hardware/software platform and a new 1/E neutron source for soft error studies: testing FPGAs at the ISIS facility
    M. Violante, L. Sterpone, A. Manuzzato, S. Gerardin, P. Rech, M. Bagatin, A. Paccagnella, C. Andreani, G. Gorini, A. Pietropaolo, G. Cardarilli, S. Pontarelli, C. Frost
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2007
    DOI: 10.1109/TNS.2007.902349

  110. Evaluating different solutions to design fault tolerant systems with SRAM-based FPGAs
    M. Sonza Reorda, L. Sterpone, M. Violante, F. Lima Kastensmidt, L. Carro
    JOURNAL OF ELECTRONIC TESTING, 2007
    DOI: 10.1007/s10836-006-0403-9

  111. Experimental Validation of a Tool for Predicting the Effects of Soft Errors in SRAM-based FPGAs
    L. Sterpone, M. Violante, R. Harboe Sorensen, D. Merodio, F. Sturesson, R. Weigand, S. Mattsson
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2007
    DOI: 10.1109/TNS.2007.910122

  112. A new hybrid fault detection technique for systems-on-a-chip
    P. Bernardi, Veiras Bolzani Lm., M. Rebaudengo, M. Sonza Reorda, F. Vargas, M. Violante
    IEEE TRANSACTIONS ON COMPUTERS, 2006
    DOI: 10.1109/TC.2006.15
    ABSTRACT: Hardening SoCs against transient faults requires new techniques able to combine high fault detection capabilities with the usual requirements of SoC design flow, e.g., reduced design-time, low area overhead, and reduced (or null) accessibility to source core descriptions. This paper proposes a new hybrid approach which combines hardening software transformations with the introduction of an Infrastructure IP with reduced memory and performance overheads. The proposed approach targets faults affecting the memory elements storing both the code and the data, independently of their location (inside or outside the processor). Extensive experimental results, including comparisons with previous approaches, are reported, which allow practically evaluating the characteristics of the method in terms of fault detection capabilities and area, memory, and performance overheads

  113. A new reliability-oriented place and route algorithm for SRAM-based FPGAs
    L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON COMPUTERS, 2006
    DOI: 10.1109/TC.2006.82
    ABSTRACT: The very high integration levels reached by VLSI technologies for SRAM-based Field Programmable Gate Arrays (FPGAs) lead to high occurrence-rate of transient faults induced by Single Event Upsets (SEUs) in FPGAs' configuration memory. Since the configuration memory defines which circuit an SRAM-based FPGA implements, any modification induced by SEUs may dramatically change the implemented circuit. When such devices are used in safety-critical applications, fault-tolerant techniques are needed to mitigate the effects of SEUs in FPGAs' configuration memory. In this paper, we analyze the effects induced by the SEUs in the configuration memory of SRAM-based FPGAs. The reported analysis outlines that SEUs in the FPGA's configuration memory are particularly critical since they are able to escape well-known fault masking techniques such as Triple Modular Redundancy (TMR). We then present a reliability-oriented place and route algorithm that, coupled with TMR, is able to effectively mitigate the effects of the considered faults. The effectiveness of the new reliability-oriented place and route algorithm is demonstrated by extensive fault injection experiments showing that the capability of tolerating SEU effects in the FPGA's configuration memory increases up to 85 times with respect to a standard TMR design technique

  114. An Analysis based on Fault Injection of Hardening Techniques for SRAM-based FPGAs
    L. Sterpone, M. Violante, S. Rezgui
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2006
    DOI: 10.1109/TNS.2006.880937

  115. Early, Accurate Dependability Analysis of CAN-Based Networked Systems
    J. Perez, M. Sonza Reorda, M. Violante
    IEEE DESIGN & TEST OF COMPUTERS, 2006
    DOI: 10.1109/MDT.2006.10
    ABSTRACT: For safety-critical applications, accurately evaluating network dependability is crucial. This article, a special selection from the Symposium on Integrated Circuits and Systems Design (SBCCI), describes a fault-injection environment for assessing CAN-based networks common in the automotive field. The approach focuses particularly on revealing how soft errors within the CAN protocol controllers affect system behavior

  116. Efficient Techniques for Automatic Verification-Oriented Test Set Optimization
    E. Sanchez, M. Sonza Reorda, G. Squillero
    INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, 2006
    DOI: 10.1007/s10766-005-0005-7
    ABSTRACT: Most Systems-on-a-Chip include a custom microprocessor core, and time and resource constraints make the design of such devices a challenging task. This paper presents a simulation-based methodology for the automatic completion and refinement of verification test sets. The approach extends µGP, an evolutionary test program generator, with the ability to enhance existing test sets. Already devised test programs are not merely included in the new set, but assimilated and used as a starting point for a new test-program cultivation task. Reusing existing material cuts down the time required to generate a verification test set during microprocessor design. Experimental results are reported on a small pipelined microprocessor, and show the effectiveness of the approach. Additionally, the use of the proposed methodology enabled an experimental analysis of the relationship among the different code coverage metrics used in test program generation

  117. Hybrid Fault Detection Technique: A Case Study on Virtex-II Pro's PowerPC
    P. Bernardi, L. Sterpone, M. Violante, M. Portela-Garcia
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2006
    DOI: 10.1109/TNS.2006.886221

  118. Hardening FPGA-based systems against SEUs: A new design methodology
    L. Sterpone, M. Violante
    JOURNAL OF COMPUTERS, 2006

  119. System-in-package testing: problems and solutions
    D. Appello, P. Bernardi, M. Grosso, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 2006
    DOI: 10.1109/MDT.2006.79
    ABSTRACT: System in package (SiP) integrates multiple dies in a common package. Therefore, testing SiP technology differs from testing a system on chip, since a SiP may integrate parts from multiple vendors. This article provides test strategies for known good die and known good substrate in the SiP. Case studies prove feasibility using the IEEE 1500 test structure

  120. A New Analytical Approach to Estimate the Effects of SEUs in TMR Architectures Implemented Through SRAM-based FPGAs
    L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2005
    DOI: 10.1109/TNS.2005.860745
    ABSTRACT: In order to successfully deploy commercial-off-the-shelf SRAM-based FPGA devices in safety- or mission-critical applications, designers need to adopt suitable hardening techniques, as well as methods for validating the correctness of the obtained designs as far as the system's dependability is concerned. In this paper we describe a new analytical approach to estimate the dependability of TMR designs implemented on SRAM-based FPGAs that, by exploiting a detailed knowledge of FPGA architectures and configuration memory, is able to predict the effects of single event upsets with the same accuracy as fault injection, but at a fraction of the fault-injection execution time

  121. Analysis of the robustness of the TMR architecture in SRAM-based FPGAs
    L. Sterpone, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2005
    DOI: 10.1109/TNS.2005.856543
    ABSTRACT: Non-radiation-hardened SRAM-based Field Programmable Gate Arrays (FPGAs) are very sensitive to Single Event Upsets (SEUs) affecting their configuration memory, and thus suitable hardening techniques are needed when they are to be deployed in critical applications. Triple Module Redundancy (TMR) is a known solution for hardening digital logic against SEUs that is widely adopted for traditional technologies (like ASICs). In this paper we present an analysis of SEU effects in circuits hardened according to TMR, to investigate the possibility of successfully applying TMR to designs mapped on commercial-off-the-shelf SRAM-based FPGAs, which are not radiation hardened. We performed different fault-injection experiments in the FPGA configuration memory implementing TMR designs, and we observed that the percentage of SEUs escaping TMR can reach 13%. In this paper we report detailed evaluations of the effects of the observed failure rates, and we propose a first step toward an improved TMR implementation

  122. Automatic Test Generation for Verifying Microprocessors
    F. Corno, E. Sanchez, M. Sonza Reorda, G. Squillero
    IEEE POTENTIALS, 2005
    DOI: 10.1109/MP.2005.1405800

  123. Evolving assembly programs: how games help microprocessor validation
    F. Corno, E. Sanchez, G. Squillero
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2005
    DOI: 10.1109/TEVC.2005.856207
    ABSTRACT: Core War is a game where two or more programs, called warriors, are executed in the same memory area by a time-sharing processor. The final goal of each warrior is to crash the others by overwriting them with illegal instructions. The game was popularized by A. K. Dewdney in his Scientific American column in the mid-1980s. In order to automatically devise strong warriors, MicroGP, a test program generation algorithm, was extended with the ability to assimilate existing code and to detect clones; furthermore, a new selection mechanism for promoting diversity independent from fitness calculations was added. The evolved warriors are the first machine-written programs ever able to become King of the Hill (champion) in all four main international Tiny Hills. This paper shows how playing Core War may help generate effective test programs for validation and test of microprocessors. Tackling a more mundane problem, the described techniques are currently being exploited for the automatic completion and refinement of existing test programs. Preliminary experimental results are reported

  124. MicroGP - An Evolutionary Assembly Program Generator
    G. Squillero
    GENETIC PROGRAMMING AND EVOLVABLE MACHINES, 2005
    DOI: 10.1007/s10710-005-2985-x
    ABSTRACT: This paper describes µGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. µGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show µGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs

  125. A BIST-based Solution for the Diagnosis of Embedded Memories Adopting Image Processing Techniques
    D. Appello, A. Fudoli, V. Tancorre, P. Bernardi, F. Corno, M. Rebaudengo, M. Sonza Reorda
    JOURNAL OF ELECTRONIC TESTING, 2004
    DOI: 10.1023/B:JETT.0000009315.57771.94

  126. A new approach to software-implemented fault tolerance
    M. Rebaudengo, M. Sonza Reorda, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2004

  127. Automatic Test Program Generation: a Case Study
    F. Corno, E. Sanchez, M. Sonza Reorda, G. Squillero
    IEEE DESIGN & TEST OF COMPUTERS, 2004
    DOI: 10.1109/MDT.2004.1277902
    ABSTRACT: Editor's note: Comprehensive coverage measurement should guide an effective testbench generation approach. Today, feedback from coverage to test generation often requires manual work; it is desirable to implement a framework that automates this feedback process. The authors propose a genetic-algorithm-based evolution framework for testbench generation. It enables small test programs to evolve and effectively capture target corner cases based on the feedback from coverage measurement

  128. Code Generation for Functional Validation of Pipelined Microprocessors
    F. Corno, E. Sanchez, M. Sonza Reorda, G. Squillero
    JOURNAL OF ELECTRONIC TESTING, 2004

  129. Efficient analysis of single event transients
    M. Sonza Reorda, M. Violante
    JOURNAL OF SYSTEMS ARCHITECTURE, 2004
    DOI: 10.1016/j.sysarc.2003.08.008

  130. Evolutionary Simulation-Based Validation
    F. Corno, M. Sonza Reorda, G. Squillero
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2004

  131. Simulation-Based Analysis of SEU Effects in SRAM-Based FPGAs
    M. Violante, L. Sterpone, M. Ceschia, D. Bortolato, P. Bernardi, M. Sonza Reorda, A. Paccagnella
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2004
    DOI: 10.1109/TNS.2004.839516
    ABSTRACT: SRAM-based field programmable gate arrays (FPGAs) are particularly sensitive to single event upsets (SEUs) that, by changing the FPGA's configuration memory, may dramatically affect the functions implemented by the device. In this paper we describe a new approach for predicting SEU effects in circuits mapped on SRAM-based FPGAs that combines radiation testing with simulation. The former is used to characterize (in terms of device cross section) the technology on which the FPGA device is based, no matter which circuit it implements. The latter is used to predict the probability for a SEU to alter the expected behavior of a given circuit. By combining the two figures, we then compute the cross section of the circuit mapped on the pre-characterized device. Experimental results are presented that compare the approach we developed with a traditional one based on radiation testing only, to measure the cross section of a circuit mapped on an FPGA. The figures reported here confirm the accuracy of our approach

  132. Accurate Analysis of Single Event Upsets in a Pipelined Microprocessor
    M. Rebaudengo, M. Sonza Reorda, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2003
    DOI: 10.1023/A:1025130131636

  133. Accurate single-event-transient analysis via zero-delay logic simulation
    M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2003
    DOI: 10.1109/TNS.2003.820729

  134. Identification and classification of single-event upsets in the configuration memory of SRAM-based FPGAs
    M. Ceschia, M. Violante, M. Sonza Reorda, A. Paccagnella, P. Bernardi, M. Rebaudengo, D. Bortolato, M. Bellato, P. Zambolin, A. Candelori
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2003

  135. Impact of data cache memory on the single event upset-induced error rate of microprocessors
    F. Faure, R. Velazco, M. Rebaudengo, M. Sonza Reorda, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2003
    DOI: 10.1109/TNS.2003.821824

  136. New Techniques for efficiently assessing reliability of SOCs
    P. Civera, L. Macchiarulo, M. Rebaudengo, M. Sonza Reorda, M. Violante
    MICROELECTRONICS JOURNAL, 2003

  137. An FPGA-based approach for speeding-up Fault Injection campaigns on safety-critical circuits
    P. Civera, L. Macchiarulo, M. Rebaudengo, M. Sonza Reorda, M. Violante
    JOURNAL OF ELECTRONIC TESTING, 2002

  138. Coping With SEUs/SETs in Microprocessors by means of Low-Cost Solutions: A Comparative Study
    M. Rebaudengo, M. Sonza Reorda, M. Violante, B. Nicolescu, R. Velazco
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2002

  139. Initializability Analysis of Synchronous Sequential Circuits
    F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda, G. Squillero
    ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2002
    DOI: 10.1145/544536.544538

  140. Exploiting Circuit Emulation for Fast Hardness Evaluation
    P. Civera, L. Macchiarulo, M. Rebaudengo, M. Sonza Reorda, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2001

  141. Experimentally evaluating an automatic approach for generating safety-critical software with respect to transient errors
    P. Cheynet, B. Nicolescu, R. Velazco, M. Rebaudengo, M. Sonza Reorda, M. Violante
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2000

  142. RT-level ITC'99 benchmarks and first ATPG results
    F. Corno, M. Sonza Reorda, G. Squillero
    IEEE DESIGN & TEST OF COMPUTERS, 2000
    DOI: 10.1109/54.867894
    ABSTRACT: New design flows require reducing work at the gate level and performing most activities before the synthesis step, including evaluation of testability of circuits. We propose a suite of RT-level benchmarks that help improve research in high-level ATPG tools. First results on the benchmarks obtained with our prototype tool show the feasibility of the approach

  143. Fault Injection for Embedded Microprocessor-based Systems
    A. Benso, M. Rebaudengo, M. Sonza Reorda
    JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 1999
    DOI: 10.3217/jucs-005-10-0693
    KEYWORDS: digital system design test and verification; dependability evaluation; fault injection; embedded microprocessor-based systems
    ABSTRACT: Microprocessor-based embedded systems are increasingly used to control safety-critical systems (e.g., air and railway traffic control, nuclear plant control, aircraft and car control). In this case, fault tolerance mechanisms are introduced at the hardware and software level. Debugging and verifying the correct design and implementation of these mechanisms call for effective environments, and Fault Injection represents a viable solution for implementing them. In this paper we present a Fault Injection environment, named FlexFI, suitable for assessing the correctness of the design and implementation of the hardware and software mechanisms existing in embedded microprocessor-based systems, and for computing the fault coverage they provide. The paper describes and analyzes different solutions for implementing the most critical modules, which differ in terms of cost, speed, and intrusiveness in the original system behavior

  144. SymFony: a hybrid topological-symbolic ATPG exploiting RT-level information
    F. Corno, P. Prinetto, M. Sonza Reorda, M. Violante, U. Glaeser, H. Vierhaus
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 1999
    DOI: 10.1109/43.743731

  145. EXFI: a low cost Fault Injection System for embedded Microprocessor-based Boards
    A. Benso, P. Prinetto, M. Rebaudengo, M. Sonza Reorda
    ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 1998
    DOI: 10.1145/296333.296351
    KEYWORDS: performance analysis; digital system design test and verification; embedded microprocessor-based systems; software implemented fault injection; fault injection
    ABSTRACT: Evaluating the faulty behavior of low-cost embedded microprocessor-based boards is an increasingly important issue, due to their adoption in many safety-critical systems. To address this issue, the paper describes a software-implemented Fault Injection approach based on the Trace Exception Mode available in most microprocessors. The architecture of a complete Fault Injection environment is proposed, integrating a module for generating a collapsed list of faults and another for performing their injection and gathering the results. The authors describe EXFI, a prototypical system implementing the approach, and provide data about some sample benchmark applications. The main advantages of EXFI are its low cost, good portability, and high efficiency

  146. Integrating On-Line and Off-Line Testing of a Switching Memory in a Telecommunication System
    S. Barbagallo, F. Corno, D. Medina, P. Prinetto, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 1998

  147. Integrating Online and Offline Testing of a Switching Memory
    S. Barbagallo, F. Corno, D. Medina, P. Prinetto, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 1998
    DOI: 10.1109/54.655184

  148. The General Product Machine: a New Model for Symbolic FSM Traversal
    G. Cabodi, P. Camurati, F. Corno, P. Prinetto, M. Sonza Reorda
    FORMAL METHODS IN SYSTEM DESIGN, 1998

  149. Circular self-test path for FSMs
    F. Corno, P. Prinetto, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 1996
    DOI: 10.1109/54.544536

  150. GALLO: a Genetic Algorithm for Floorplan Area Optimization
    M. Rebaudengo, M. Sonza Reorda
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 1996

  151. GATTO: A Genetic Algorithm for Automatic Test Pattern Generation for Large Synchronous Sequential Circuits
    F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 1996
    DOI: 10.1109/43.511578

  152. Il ruolo delle tecniche di fault injection nell'analisi dell'affidabilità dei sistemi
    A. Benso, F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda
    AEI AUTOMAZIONE ENERGIA INFORMAZIONE, 1996
    KEYWORDS: reliability; fault injection; digital system design test and verification
    ABSTRACT: With the growing use of computer systems in an ever wider range of applications, studying the possible consequences of a malfunction in one of these systems becomes of primary importance. In critical applications, such as air traffic control, nuclear reactor control, telecommunication systems, and medical equipment, a failure of the computer system can cost human lives or cause heavy economic losses. The computer systems employed in this kind of application are usually designed to tolerate the faults that can cause serious malfunctions. It therefore becomes necessary to identify quality methodologies for ascertaining that a safety-critical system has been implemented correctly and meets the required dependability levels. A valid solution to this problem is offered by fault injection, i.e., the deliberate injection of faults into a system in order to analyze its reactions. The first chapter of this article addresses the problem of system dependability and describes the context in which the analysis of dependable systems takes place. The second chapter defines the fault models and the differences between these models and a test-oriented approach. The third chapter describes the main fault injection techniques presented in the literature, highlighting, for each one, its basic idea, its merits, and its main drawbacks

  153. Role of fault injection techniques in system dependability analysis
    A. Benso, F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda
    AEI AUTOMAZIONE ENERGIA INFORMAZIONE, 1996

  154. Testable Synthesis of Control Units via Circular Self-Test Path: Problems and Solutions
    F. Corno, P. Prinetto, M. Sonza Reorda
    IEEE DESIGN & TEST OF COMPUTERS, 1996

  155. A Parallel System for Test Pattern Generation
    G. Balboni, G. Cabodi, S. Gai, M. Sonza Reorda
    PARALLEL COMPUTING, 1993

  156. An Approach to Sequential Circuit Diagnosis Based on Formal Verification Techniques
    G. Cabodi, P. Camurati, F. Corno, P. Prinetto, M. Sonza Reorda
    JOURNAL OF ELECTRONIC TESTING, 1993

  157. TPDL*: Extended Temporal Profile Description Language
    G. Cabodi, P. Camurati, P. Prinetto, M. Sonza Reorda
    SOFTWARE-PRACTICE & EXPERIENCE, 1991

  158. A Transputer-based gate-level fault simulator
    G. Cabodi, S. Gai, M. Sonza Reorda
    MICROPROCESSING AND MICROPROGRAMMING, 1990

  159. Expressing logical and temporal conditions in simulation environments: TPDL*
    G. Cabodi, P. Camurati, P. Prinetto, M. Sonza Reorda
    MICROPROCESSING AND MICROPROGRAMMING, 1989

  160. C_TPDL*: adapting TPDL* to concurrent simulation environments
    G. Cabodi, P. Camurati, P. Prinetto, M. Sonza Reorda
    MICROPROCESSING AND MICROPROGRAMMING, 1986