1. A cost-effective proposal for an RFID-based system for agri-food traceability
    R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    KEYWORDS: agri-food; fruit; traceability; tracking; rfid; supply chain automation; information management; warehouse
    ABSTRACT: Agri-food companies operating in the packaging, storage and distribution of fruit and vegetables need information systems able to meet the traceability requirements imposed by current European regulations. This paper evaluates the benefits and drawbacks of a semi-automated information management tracking system for a warehouse specialized in the fruit market. It targets small and medium-sized companies with limited financial means for investments and without technical support on their premises. These requirements are met by using a PDA equipped with an RFID reader: the information collected throughout the production process is stored locally in the PDA and occasionally sent to a server. In this way, the proposed system relies neither on a widespread wireless network nor on fixed RFID readers, which could increase automation but require more investment and assistance.

  2. A security protocol for RFID traceability
    F. Gandino, B. Montrucchio, M. Rebaudengo
    DOI: 10.1002/dac.3109
    KEYWORDS: rfid; security; public key; supply chain
    ABSTRACT: Nowadays, traceability represents a key activity in many sectors. Many modern traceability systems are based on radio-frequency identification (RFID) technology. However, the distributed information stored on RFID tags raises new security problems. This paper presents traceability multi-entity cryptography, a high-level data protection scheme based on public key cryptography that is able to protect RFID data for traceability and chain activities. The scheme can manage entities with different permissions, and it is especially suitable for applications that require complex information systems. Traceability multi-entity cryptography prevents industrial espionage, guarantees information authenticity, protects customer privacy, and detects malicious alterations of the information. In contrast to state-of-the-art RFID security schemes, the proposed protocol is applicable to standard RFID tags without any cryptographic capability, and it does not require a central database.

  3. (Over-)Realism in evolutionary computation: Commentary on "On the Mapping of Genotype to Phenotype in Evolutionary Algorithms" by Peter A. Whigham, Grant Dick, and James Maclaurin
    G. Squillero, A. Tonda
    DOI: 10.1007/s10710-017-9295-y
    KEYWORDS: evolutionary computation
    ABSTRACT: Inspiring metaphors play an important role at the beginning of an investigation, but become less important as a research field matures and the real phenomena involved are understood. Nowadays, in evolutionary computation, biological analogies should be taken into consideration only if they deliver significant advantages.

  4. A High-Level Approach to Analyze the Effects of Soft Errors on Lossless Compression Algorithms
    S. Avramenko, M. Sonza Reorda, M. Violante, G. Fey
    DOI: 10.1007/s10836-016-5637-6
    KEYWORDS: high-level fault injection; lossless compression; reliability; soft errors; electrical and electronic engineering
    ABSTRACT: In space applications, the data logging sub-system often requires compression to cope with large amounts of data as well as with limited storage and communication capabilities. The usage of Commercial off-the-Shelf (COTS) hardware components is becoming more common, since they are particularly suitable to meet high performance requirements and to cut costs with respect to space-qualified components. On the other hand, given the characteristics of the space environment, the usage of COTS components makes radiation-induced soft errors highly probable. The purpose of this work is to analyze a set of lossless compression algorithms in order to compare their robustness against soft errors. The proposed approach works on the unhardened version of the programs, aiming to estimate their intrinsic robustness. The main contribution of the work lies in investigating the possibility of performing an early comparison between different compression algorithms at a high level, by considering only their data structures (corresponding to program variables). This approach is virtually agnostic of the downstream implementation details: it aims to compare the considered programs (in terms of robustness against soft errors) before the final computing platform is defined. The results of the high-level analysis can also be used to collect useful information to optimize the hardening phase. Experimental results based on the OpenRISC processor are reported. They suggest that, when properly adopted, the proposed approach makes it possible to compare a set of compression algorithms even with very limited knowledge of the target computing system.

  5. A Key Distribution Scheme for Mobile Wireless Sensor Networks: Q-s-Composite
    F. Gandino, R. Ferrero, M. Rebaudengo
    DOI: 10.1109/TIFS.2016.2601061
    KEYWORDS: safety; risk; reliability and quality; computer networks and communications; key management; random predistribution; wsn
    ABSTRACT: The majority of security systems for wireless sensor networks are based on symmetric encryption. The main open issue for these approaches concerns the establishment of symmetric keys. A promising key distribution technique is the random predistribution of secret keys. Despite its effectiveness, this approach presents considerable memory overheads, in contrast with the limited resources of wireless sensor networks. In this paper, an in-depth analytical study is conducted on the state-of-the-art key distribution schemes based on random predistribution. A new protocol, called q-s-composite, is proposed in order to exploit the best features of random predistribution and to improve on it with lower requirements. The main novelties of q-s-composite are the organization of the secret material, which allows a reduction in storage, the proposed technique for pairwise key generation, and the limited number of predistributed keys used in the generation of a pairwise key. A comparative analysis demonstrates that the proposed approach provides a higher level of security than the state-of-the-art schemes.
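As background for the random-predistribution approach this abstract builds on, here is a minimal sketch of the classic shared-key connectivity estimate (a generic textbook formula, not the paper's q-s-composite protocol; the function name is illustrative):

```python
from math import comb

def share_probability(pool_size: int, ring_size: int) -> float:
    """Probability that two nodes, each holding `ring_size` keys drawn
    uniformly without replacement from a common pool of `pool_size` keys,
    share at least one key (classic random-predistribution estimate)."""
    # 1 minus the probability that the second ring avoids the first entirely
    return 1.0 - comb(pool_size - ring_size, ring_size) / comb(pool_size, ring_size)
```

Larger rings raise the connection probability but also the per-node memory overhead, which is exactly the trade-off the paper targets.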

  6. A Low-Cost Reliability vs. Cost Trade-Off Methodology to Selectively Harden Logic Circuits
    I. Wali, B. Deveautour, A. Virazel, A. Bosio, P. Girard, M. Sonza Reorda
    DOI: 10.1007/s10836-017-5640-6

  7. Evaluation of transient errors in GPGPUs for safety critical applications: An effective simulation-based fault injection environment
    S. Azimi, B. Du, L. Sterpone
    DOI: 10.1016/j.sysarc.2017.01.009
    KEYWORDS: fault injection; fault tolerant system; gpgpu; transient errors; validations; safety critical applications
    ABSTRACT: General Purpose Graphics Processing Units (GPGPUs) are increasingly adopted thanks to their high computational capabilities. GPGPUs are preferable to CPUs for a large range of computationally intensive applications, not necessarily related to computer graphics. Within the high performance computing context, GPGPUs require a large amount of resources and feature many execution units. GPGPUs are also becoming attractive for safety-critical applications, where the phenomenon of transient errors is a major concern. In this paper we propose a novel fault injection simulation methodology for the accurate simulation of GPGPU applications during the occurrence of transient errors. The developed environment allows injecting transient errors within the whole memory area of GPGPUs and into resources that are not user-accessible, such as the combinational logic and sequential elements of the streaming processors. The capability of the fault injection simulation platform has been evaluated by testing three benchmark applications, including mitigation approaches such as Duplication With Comparison, Triple Modular Redundancy and Algorithm-Based Fault Tolerance. The measured computational costs and time are minimal, thus enabling the usage of the developed approach for effective transient error evaluation.

  8. Microprocessor Testing: Functional Meets Structural Test
    A. Touati, P. Girard, A. Virazel
    DOI: 10.1142/S0218126617400072

  9. On the consolidation of mixed criticalities applications on multicore architectures
    S. Esposito, M. Violante
    DOI: 10.1007/s10836-016-5636-7
    KEYWORDS: mixed criticality; software implemented fault tolerance; hybrid architecture; multicore systems; real time applications
    ABSTRACT: In this paper we propose a hybrid solution to ensure the correctness of results when deploying several applications with different safety requirements on a single multi-core-based system. The proposed solution is based on lightweight hardware redundancy, implemented using smart watchdogs and voter logic, combined with software redundancy. Two software redundancy techniques are used: the first is software temporal triple modular redundancy, used for tasks with low criticality and no real-time requirements; the second is triple modular redundancy assisted by a hardware voter, used for tasks with high criticality and real-time requirements. A hypervisor is used to separate each task in the system into an independent resource partition, thus ensuring that no functional interference occurs. The proposed solution has been evaluated through hardware and software fault injection on two hardware platforms, featuring a dual-core processor and a quad-core processor respectively. Results show that the proposed architecture achieves high fault tolerance.
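For readers unfamiliar with software triple modular redundancy, here is a minimal generic sketch of temporal TMR with majority voting (an illustration of the general technique only, not the paper's hybrid architecture; all names are hypothetical):

```python
def tmr_vote(results):
    """Majority vote over three redundant results; raises if no two agree."""
    a, b, c = results
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: uncorrectable error")

def temporal_tmr(task, *args):
    """Software temporal TMR: execute the same task three times
    sequentially and vote on the outputs (generic sketch)."""
    return tmr_vote([task(*args) for _ in range(3)])
```

A transient fault corrupting a single execution is outvoted by the other two; the paper's high-criticality variant instead delegates the vote to hardware to meet real-time constraints.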

  10. A Fault-Tolerant Ripple-Carry Adder with Controllable-Polarity Transistors
    H. Mohammadi, P. Gaillardon, J. Zhang, G. Micheli, E. Sanchez, M. Reorda
    DOI: 10.1145/2988234

  11. A Flexible Framework for the Automatic Generation of SBST Programs
    A. Riefert, R. Cantoro, M. Sauer, M. Sonza Reorda, B. Becker
    DOI: 10.1109/TVLSI.2016.2538800
    KEYWORDS: sbst, testing, atpg, sat
    ABSTRACT: Software-based self-test (SBST) techniques are used to test processors and processor cores against permanent faults introduced by the manufacturing process or to perform in-field test in safety-critical applications. However, the generation of an SBST program is usually associated with high costs, as it requires significant manual effort by a skilled engineer with in-depth knowledge of the processor under test. In this paper, we propose an approach for the automatic generation of SBST programs. First, we detail an automatic test pattern generation (ATPG) framework for the generation of functional test sequences. Second, we describe the extension of this framework with the concept of a validity checker module (VCM), which allows the specification of constraints on the generated sequences. Third, we use the VCM to express typical constraints that exist when SBST is adopted for in-field test. In our experimental results, we evaluate the proposed approach with a MIPS (microprocessor without interlocked pipeline stages)-like microprocessor. The results show that the proposed method is the first approach able to automatically generate SBST programs, for both end-of-manufacturing and in-field test, whose fault efficiency is superior to that of programs produced by state-of-the-art manual approaches.

  12. A Hybrid Fault-Tolerant Architecture for Highly Reliable Processing Cores
    I. Wali, A. Virazel, A. Bosio, P. Girard, S. Pravossoudovitch, M. Sonza Reorda
    DOI: 10.1007/s10836-016-5578-0
    KEYWORDS: fault tolerance, microprocessor, single event transient, permanent fault, delay fault, power consumption, high dependability, fault injection
    ABSTRACT: Increasing vulnerability of transistors and interconnects due to scaling is continuously challenging the reliability of future microprocessors. Lifetime reliability is gaining attention over performance as a design factor even for lower-end commodity applications. In this work we present a low-power Hybrid fault tolerant architecture for reliability improvement of pipelined microprocessors by protecting their combinational logic parts. The architecture can handle a broad spectrum of faults with little impact on performance by combining different types of redundancies. Moreover, it addresses the problem of error propagation behavior of nonlinear pipelines and error detection in pipeline stages with memory interfaces. Our case-study implementation of fault tolerant MIPS microprocessors highlights four main advantages of the proposed solution. It offers (i) 11.6% power saving, (ii) improved transient error detection capability, (iii) lifetime reliability improvement, and (iv) better fault accumulation effect handling, in comparison with TMR architectures. We also present a gate-level fault-injection framework that offers high fidelity to model physical defects and transient faults

  13. A mobile and low-cost system for environmental monitoring: a case study
    A. Velasco, R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    SENSORS, 2016
    DOI: 10.3390/s16050710
    KEYWORDS: air monitoring; air pollution; wireless sensor networks; mobile sensors
    ABSTRACT: Northern Italy has one of the highest air pollution levels in the European Union. This paper describes a mobile wireless sensor network system intended to complement the already existing official air-quality monitoring systems of the metropolitan town of Torino. The system is characterized by high portability and low cost, in both acquisition and maintenance. The high portability of the system aims to improve the spatial distribution and resolution of the measurements from the official static monitoring stations. Commercial PM10 and O3 sensors were incorporated into the system, and were subsequently tested in a controlled environment and in the field. The field tests, performed in collaboration with the local environmental agency, revealed that the sensors can provide accurate data if properly calibrated and maintained. Further tests were carried out by mounting the system on bicycles in order to increase its mobility.

  14. An Error-Detection and Self-Repairing Method for Dynamically and Partially Reconfigurable Systems
    M. Sonza Reorda, L. Sterpone, A. Ullah
    DOI: 10.1109/TC.2016.2607749
    KEYWORDS: transient errors; performances; self-repairing; fpga; dynamic reconfiguration; partial reconfiguration
    ABSTRACT: Reconfigurable systems are gaining increasing interest in the domain of safety-critical applications, for example in the space and avionic domains. In fact, the capability of reconfiguring the system during run-time execution and the high computational power of modern Field Programmable Gate Arrays (FPGAs) make these devices suitable for intensive data processing tasks. Moreover, such systems must also guarantee the abilities of self-awareness, self-diagnosis and self-repair in order to cope with errors due to the harsh conditions typically existing in some environments. In this paper we propose a self-repairing method for partially and dynamically reconfigurable systems applied at a fine-grain granularity level. Our method is able to detect, correct and recover from errors using the run-time capabilities offered by modern SRAM-based FPGAs. Fault injection campaigns have been executed on a dynamically reconfigurable system embedding a number of benchmark circuits. Experimental results demonstrate that our method achieves full detection of single and multiple errors, while significantly improving the system availability with respect to traditional error detection and correction methods.

  15. Anatomy of a portfolio optimizer under a limited budget constraint
    I. Deplano, G. Squillero, A. Tonda
    DOI: 10.1007/s12065-016-0144-3
    KEYWORDS: portfolio optimization; multi-layer perceptron; multi-objective optimization; financial forecasting
    ABSTRACT: Predicting the market's behavior to profit from trading stocks is far from trivial. Such a task becomes even harder when investors do not have large amounts of money available, and thus cannot influence this complex system in any way. Machine learning paradigms have already been applied to financial forecasting, but usually with no restrictions on the size of the investor's budget. In this paper, we analyze an evolutionary portfolio optimizer for the management of limited budgets, dissecting each part of the framework and discussing in detail the issues and the motivations that led to the final choices. Expected returns are modeled by resorting to artificial neural networks trained on past market data, and the portfolio composition is chosen by approximating the solution to a multi-objective constrained problem. An investment simulator is eventually used to measure the portfolio performance. The proposed approach is tested on real-world data from the New York, Milan and Paris stock exchanges, exploiting data from June 2011 to May 2014 to train the framework, and data from June 2014 to July 2015 to validate it. Experimental results demonstrate that the presented tool is able to obtain a more than satisfying profit for the considered time frame.

  16. Divergence of character and premature convergence: A survey of methodologies for promoting diversity in evolutionary optimization
    G. Squillero, A. Tonda
    DOI: 10.1016/j.ins.2015.09.056
    KEYWORDS: diversity preservation; evolutionary optimization
    ABSTRACT: In the past decades, different evolutionary optimization methodologies have been proposed by scholars and exploited by practitioners in a wide range of applications. Each paradigm shows distinctive features, typical advantages, and characteristic disadvantages; however, one single problem is shared by almost all of them: the "lack of speciation". While natural selection favors variations toward greater divergence, in artificial evolution candidate solutions do homologize. Many authors have argued that promoting diversity would be beneficial in evolutionary optimization processes, and that it could help avoid premature convergence on sub-optimal solutions. The paper surveys the research in this area up to the mid-2010s, re-orders and re-interprets different methodologies within a single framework, and proposes a novel three-axis taxonomy. Its goal is to provide the reader with a unifying view of the many contributions in this important corpus, allowing comparisons and informed choices. Characteristics of the different techniques are discussed, and similarities are highlighted; practical ways to measure and promote diversity are also suggested.

  17. Exploiting Evolutionary Modeling to Prevail in Iterated Prisoner's Dilemma Tournaments
    M. Gaudesi, E. Piccolo, G. Squillero, A. Tonda
    DOI: 10.1109/TCIAIG.2015.2439061
    KEYWORDS: games, sociology, statistics, computational modeling, mathematical model, adaptation models, game theory, opponent modeling, evolutionary algorithms, iterated prisoner's dilemma, non-deterministic finite state machine
    ABSTRACT: The iterated prisoner's dilemma is a famous model of cooperation and conflict in game theory. Its origin can be traced back to the Cold War, and countless strategies for playing it have been proposed so far, either designed by hand or automatically generated by computers. In the 2000s, scholars started focusing on adaptive players, that is, players able to classify their opponent's behavior and adopt an effective counter-strategy. The player presented in this paper pushes this idea even further: it builds a model of the current adversary from scratch, without relying on any pre-defined archetypes, and tweaks it as the game develops using an evolutionary algorithm; at the same time, it exploits the model to lead the game into the most favorable continuation. Models are compact non-deterministic finite state machines; they are extremely efficient in predicting opponents' replies, without necessarily being completely correct. Experimental results show that such a player is able to win several one-to-one games against strong opponents taken from the literature, and that it consistently prevails in round-robin tournaments of different sizes.
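As background, here is a minimal sketch of the iterated prisoner's dilemma itself, using the standard Axelrod payoff values (3/0/5/1); this illustrates the game the paper's player competes in, not the paper's evolutionary opponent-modeling:

```python
# Standard payoffs: (my_move, their_move) -> my score
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the dilemma; each strategy maps the opponent's move history
    to its next move ('C' cooperate / 'D' defect). Returns both scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else 'C'  # copy opponent's last move
always_defect = lambda opp: 'D'
```

An adaptive player of the kind the paper describes would replace the fixed lambda with a model of the opponent learned during the game.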

  18. Fast Hierarchical Key Management Scheme with Transitory Master Key for Wireless Sensor Networks
    F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    DOI: 10.1109/JIOT.2016.2599641
    KEYWORDS: information systems; hardware and architecture; computer science applications; computer vision and pattern recognition; computer networks and communications; information systems and management; key management; symmetric encryption; transitory master key (mk); wireless sensor network (wsn); signal processing
    ABSTRACT: Symmetric encryption is the most widely adopted security solution for wireless sensor networks. The main open issue in this context is the establishment of symmetric keys. Although many key management schemes have been proposed in order to guarantee a high security level, a solution without weaknesses does not yet exist. An important class of key management schemes is based on a transitory master key (MK). In this approach, a global secret is used during the initialization phase to generate pair-wise keys, and it is deleted during the working phase. However, if an adversary compromises a node before the deletion of the MK, the security of the whole network is compromised. In this paper, a new key negotiation routine is proposed. The new routine is integrated with a well-known key computation mechanism based on a transitory master secret. The goal of the proposed approach is to reduce the time required for the initialization phase, thus reducing the probability that the master secret is compromised. This goal is achieved by splitting the initialization phase into hierarchical subphases with an increasing level of security. An experimental analysis demonstrates that the proposed scheme provides a significant reduction in the time required before deleting the transitory secret material, thus increasing the overall security level. Moreover, the proposed scheme allows new nodes to be added after the first deployment, with a suitable routine able to complete the key establishment in the same time as for the initial deployment.

  19. Identification and Rejuvenation of NBTI-Critical Logic Paths in Nanoscale Circuits
    M. Jenihhin, G. Squillero, T.S. Copetti, V. Tihhomirov, S. Kostin, M. Gaudesi, F. Vargas, J. Raik, M. Sonza Reorda, L.B. Poehls, R. Ubar, G.C. Medeiros
    DOI: 10.1007/s10836-016-5589-x
    KEYWORDS: hardware rejuvenation, aging, nbti, critical path identification, logic circuit, evolutionary computation, microgp, zamiacad
    ABSTRACT: The Negative Bias Temperature Instability (NBTI) phenomenon is agreed to be one of the main reliability concerns in nanoscale circuits. It increases the threshold voltage of pMOS transistors and thus slows down signal propagation along logic paths between flip-flops. NBTI may cause intermittent faults and, ultimately, the circuit's permanent functional failure. In this paper, we propose an innovative NBTI mitigation approach that rejuvenates the nanoscale logic along NBTI-critical paths. The method is based on hierarchical identification of NBTI-critical paths and the generation of rejuvenation stimuli using an Evolutionary Algorithm. A new, fast, yet accurate model for the computation of NBTI-induced delays at gate level is developed, based on intensive SPICE simulations of individual gates. The generated rejuvenation stimuli are used to drive into the recovery phase those pMOS transistors that are most critical for the NBTI-induced path delay. The rejuvenation procedure is intended to be applied to the circuit periodically, as an execution overhead. Experimental results on a set of designs demonstrate a reduction of NBTI-induced delays by up to two times, with an execution overhead of 0.1% or less. The proposed approach aims at extending the reliable lifetime of nanoelectronics.

  20. Investigation of interference models for RFID systems
    L. Zhang, R. Ferrero, F. Gandino, M. Rebaudengo
    SENSORS, 2016
    DOI: 10.3390/s16020199
    KEYWORDS: rfid; interference model; reader-to-reader collision
    ABSTRACT: The reader-to-reader collision in an RFID system is a challenging problem for communications technology. In order to model the interference between RFID readers, different interference models have been proposed, mainly based on two approaches: single and additive interference. The former only considers the interference from one reader within a certain range, whereas the latter takes into account the sum of all of the simultaneous interferences in order to emulate a more realistic behavior. Although the difference between the two approaches has been theoretically analyzed in previous research, their effects on the estimated performance of the reader-to-reader anti-collision protocols have not yet been investigated. In this paper, the influence of the interference model on the anti-collision protocols is studied by simulating a representative state-of-the-art protocol. The results presented in this paper highlight that the use of additive models, although more computationally intensive, is mandatory to improve the performance of anti-collision protocols
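The difference between the two interference approaches can be sketched in a few lines (a toy illustration of the modeling choice, not the paper's simulation; the names and threshold are hypothetical):

```python
def single_model(interferer_powers, threshold):
    """Single interference: a collision is declared only if the strongest
    individual interferer exceeds the threshold."""
    return max(interferer_powers, default=0.0) > threshold

def additive_model(interferer_powers, threshold):
    """Additive interference: the powers of all simultaneous interferers
    are summed before comparing against the threshold."""
    return sum(interferer_powers) > threshold
```

With three interferers of power 0.4 against a threshold of 1.0, the single model reports no collision while the additive model does, which is precisely the kind of discrepancy the paper measures on anti-collision protocols.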

  21. KITO tool: A fault injection environment in Linux kernel data structures
    A. Velasco, B. Montrucchio, M. Rebaudengo
    DOI: 10.1016/j.microrel.2016.02.011
    KEYWORDS: fault injection; linux kernel; operating system dependability; safety-critical applications
    ABSTRACT: Transient faults in safety-critical computer-based systems represent a major issue for guaranteeing correct system behaviour. Fault injection is a commonly used method to evaluate the sensitivity of such systems. This paper presents a fault injection tool, called KITO, to evaluate the effects of faults in memory containing data structures belonging to a Unix-based Operating System and, in particular, elements linked to resource synchronization management. An experimental analysis was conducted on a large set of memory elements of the Operating System itself, while the system was subject to stress from benchmark programs that use different elements of the Linux kernel. Experimental results show that synchronization aspects of the kernel are susceptible to a significant set of possible errors ranging from performance degradation to failure in successfully completing the benchmark application.

  22. New Techniques to Reduce the Execution Time of Functional Test Programs
    M. Gaudesi, I. Pomeranz, M. Sonza Reorda, G. Squillero
    DOI: 10.1109/TC.2016.2643663
    KEYWORDS: test compaction; test generation; test program; software-based self-test
    ABSTRACT: The compaction of test programs for processor-based systems is of utmost practical importance: Software-Based Self-Test (SBST) is nowadays increasingly adopted, especially for in-field test of safety-critical applications, and both the size and the execution time of the test are critical parameters. However, while compacting the size of binary test sequences has been thoroughly studied over the years, the reduction of the execution time of test programs is still a rather unexplored area of research. This paper describes a family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size. The proposed solutions are based on instruction removal and restoration, which is shown to be computationally more efficient than instruction removal alone. Experimental results demonstrate the compaction capabilities, and allow analyzing computational costs and effectiveness of the different algorithms.
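A toy sketch of greedy instruction removal with restoration, the general idea behind the compaction family described above (not the paper's algorithms; the `coverage` oracle is a hypothetical stand-in for a fault simulation run):

```python
def compact(program, coverage):
    """Greedy test-program compaction (generic sketch): tentatively drop
    each instruction and restore it only if the fault coverage reported
    by the `coverage` oracle would drop below the original value."""
    target = coverage(program)
    kept = list(program)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]
        if coverage(trial) >= target:
            kept = trial   # removal accepted: instruction was redundant
        else:
            i += 1         # restored: instruction is essential
    return kept
```

In practice each oracle call is an expensive fault simulation, which is why the paper's combined removal-and-restoration algorithms, which reduce the number of such evaluations, matter.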

  23. OLT(RE)2: an On-Line on-demand Testing approach for permanent Radiation Effects in REconfigurable systems
    D. Cozzi, S. Korf, L. Cassano, J. Hagemeyer, A. Domenici, C. Bernardeschi, M. Porrmann, L. Sterpone
    DOI: 10.1109/TETC.2016.2586195
    KEYWORDS: permanent faults; on-line testing; reconfiguration; fpga; radiation effects
    ABSTRACT: Reconfigurable systems have gained great interest in a wide range of application fields, including aerospace, where electronic devices are exposed to a very harsh working environment. Commercial SRAM-based FPGA devices represent an extremely interesting hardware platform for this kind of system, since they combine low cost with the possibility to utilize state-of-the-art processing power as well as the flexibility of reconfigurable hardware. In this paper we present OLT(RE)2: an on-line on-demand approach to test permanent faults induced by radiation in reconfigurable systems used in space missions. The proposed approach relies on a test circuit and on custom place-and-route algorithms. OLT(RE)2 exploits the partial dynamic reconfigurability offered by today's SRAM-based FPGAs to place the test circuits at run-time. The goal of OLT(RE)2 is to test unprogrammed areas of the FPGA before using them, thus preventing functional modules of the reconfigurable system from being placed on areas with faulty resources. Experimental results have shown that (i) it is possible to generate, place and route the test circuits needed to detect, on average, more than 99% of the physical wires and about 97% of the programmable interconnection points of an arbitrarily large region of the FPGA in a reasonable time, and that (ii) it is possible to download and run the whole test suite on the target device without interfering with the normal functioning of the system.

  24. Observability solutions for in-field functional test of processor-based systems: a survey and quantitative test case evaluation
    J. Perez Acle, R. Cantoro, E. Sanchez, M. Sonza Reorda, G. Squillero
    DOI: 10.1016/j.micpro.2016.09.002
    KEYWORDS: functional test; software-based self-test; performance counters; fault simulation; observability
    ABSTRACT: The usage of electronic systems in safety-critical applications requires mechanisms for the early detection of faults affecting the hardware while the system is in the field. When the system includes a processor, one approach is to make use of functional test programs that are run by the processor itself. Such programs exercise the different parts of the system, and eventually expose the difference between a fully functional system and a faulty one. Their effectiveness depends, among other factors, on the mechanism adopted to observe the behavior of the system, which in turn is deeply affected by the constraints imposed by the application environment. This paper describes different mechanisms for supporting the observation of fault effects during such in-field functional test, and it reports and discusses the results of an experimental analysis performed on some representative case studies, which allow drawing some general conclusions. The gathered results allow the quantitative evaluation of the drop in fault coverage coming from the adoption of the alternative approaches with respect to the ideal case in which all the outputs can be continuously monitored, which is the typical scenario for test generation. The reader can thus better evaluate the advantages and disadvantages provided by each approach. As a major contribution, the paper shows that in the worst case the drop can be significant, while it can be minimized (without introducing any significant extra cost in terms of test generation and duration) through the adoption of a suitable observation mechanism, e.g., using Performance Counters possibly existing in the system. Suitable techniques to implement fault simulation campaigns to assess the effectiveness of different observation mechanisms are also described

  25. On the prediction of Radiation-induced SETs in Flash-based FPGAs
    S. Azimi, B. Du, L. Sterpone
    KEYWORDS: fpgas; sets; seus; radiation effects
    ABSTRACT: The present work proposes a methodology to predict radiation-induced Single Event Transient (SET) phenomena within the silicon structure of Flash-based FPGA devices. The method is based on a Monte Carlo analysis, which makes it possible to calculate the effective duration and amplitude of the SET generated by a radiation strike, and it effectively characterizes the sensitivity of a circuit to transient effect phenomena. Experimental results compare data from radiation tests performed at different Linear Energy Transfer (LET) values with the corresponding SET sensitivity

  26. Online Test of Control Flow Errors: A New Debug Interface-Based Approach
    B. Du, M. Sonza Reorda, L. Sterpone, L. Parra, M. Portela-García, A. Lindoso, L. Entrena
    DOI: 10.1109/TC.2015.2456014
    KEYWORDS: online test; control flow checking
    ABSTRACT: Detecting the effects of transient faults is a key point in many processor-based safety-critical applications. This paper proposes to adopt the debug interface module existing today in several processors/controllers available on the market. In this way, we can achieve a good detection capability and small latency with respect to control flow errors, while the cost for adopting the proposed technique is rather limited and does not involve any change either in the processor hardware or in the application software. The method works even if the processor uses caches, and we experimentally evaluated its characteristics, demonstrating the advantages and showing the limitations on two pipelined processors. Experimental results obtained by fault injection using different software applications demonstrate that the method is able to achieve high fault coverage (more than 95 percent in nearly all the considered cases) with a limited cost in terms of area and performance degradation

  27. Optimizing groups of colluding strong attackers in mobile urban communication networks with evolutionary algorithms
    D. Bucur, G. Iacca, M. Gaudesi, G. Squillero, A. Tonda
    DOI: 10.1016/j.asoc.2015.11.024
    KEYWORDS: routing; network security; evolutionary algorithms; delay-tolerant network; cooperative co-evolution
    ABSTRACT: In novel forms of the Social Internet of Things, any mobile user within communication range may help routing messages for another user in the network. The resulting message delivery rate depends both on the users' mobility patterns and the message load in the network. This new type of configuration, however, poses new challenges to security: amongst them, assessing the effect that a group of colluding malicious participants can have on the global message delivery rate in such a network is far from trivial. In this work, after modeling this question as an optimization problem, we are able to find quite interesting results by coupling a network simulator with an evolutionary algorithm. The chosen algorithm is specifically designed to solve problems whose solutions can be decomposed into parts sharing the same structure. We demonstrate the effectiveness of the proposed approach on two medium-sized Delay-Tolerant Networks, realistically simulated in the urban contexts of two cities with very different route topologies: Venice and San Francisco. In all experiments, our methodology produces attack patterns that greatly lower network performance with respect to previous studies on the subject, as the evolutionary core is able to exploit the specific weaknesses of each target configuration

  28. Scan-Chain Intra-Cell Aware Testing
    A. Touati, A. Bosio, P. Girard, A. Virazel, P. Bernardi, M. Sonza Reorda, E. Auvray
    DOI: 10.1109/TETC.2016.2624311
    ABSTRACT: This paper first presents an evaluation of the effectiveness of different test pattern sets in terms of ability to detect possible intra-cell defects affecting the scan flip-flops. The analysis is then used to develop an effective test solution to improve the overall test quality. As a major result, the paper demonstrates that by combining test vectors generated by a commercial ATPG to detect stuck-at and delay faults, plus a fragment of extra test patterns generated to specifically target the escaped defects, we can obtain a higher intra-cell defect coverage (i.e., 6.46% on average) and a shorter test time (i.e., 42.20% on average) than by straightforwardly using an ATPG which directly targets these defects.

  29. Toner savings based on quasi-random sequences and a perceptual study for green printing
    B. Montrucchio, R. Ferrero
    DOI: 10.1109/TIP.2016.2552641
    KEYWORDS: green computing; toner savings; sobol' sequence
    ABSTRACT: Toner savings in monochromatic printing are an important target for improving green computing performance and, more specifically, green printing. In order to extend the lifetime of the printer cartridge, some options are available for laser printers, usually reducing the number of dots with respect to the normal print quality. However, available algorithms and patents do not provide a method for dynamically adapting the percentage of toner savings to the required printing quality. In this paper, we introduce a new algorithm, based on quasi-random sequences, for reducing the number of dots in the printing process, able to achieve optimal discrepancy and low computational complexity for all print quality levels. In order to reduce patterns in the removed dots, blue noise dithering is applied when the desired percentage of toner savings is moderate. The proposed solution can be easily implemented in the printer firmware, given its low computational complexity. In order to verify the results from a perceptual point of view, an extended test with 135 volunteers and more than 5000 comparisons has been performed, in addition to checking that toner is effectively saved. Results show that the proposed approach can produce a reduction of the perceived quality almost directly proportional to the number of monochromatic dots skipped, with only a limited influence from the font used. The perceptual results of the proposal are better than those of previous approaches. The proposed algorithm appears to be a promising technique for improving green printing in monochromatic laser printers without using custom fonts
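The core dot-skipping idea can be illustrated with a short sketch. This is not the paper's algorithm (which uses the Sobol' sequence and blue noise dithering); it uses the related one-dimensional van der Corput low-discrepancy sequence, and all names are illustrative: a dot is skipped when its quasi-random value falls below the requested savings fraction, so skipped dots stay evenly spread at any savings level.

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput low-discrepancy sequence in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def skip_mask(num_dots, savings):
    """True at position i means: skip the i-th printed dot.

    Low discrepancy guarantees that every window of dots is thinned
    by approximately the requested fraction."""
    return [van_der_corput(i) < savings for i in range(num_dots)]

mask = skip_mask(1000, 0.25)  # request 25% toner savings
```

Because the sequence is deterministic, the same page always prints the same way, and changing the savings fraction smoothly adapts toner reduction to the requested print quality, which is the dynamic adaptation the abstract highlights.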

  30. UA2TPG: An untestability analyzer and test pattern generator for SEUs in the configuration memory of SRAM-based FPGAs
    C. Bernardeschi, L. Cassano, A. Domenici, L. Sterpone
    DOI: 10.1016/j.vlsi.2016.03.004
    KEYWORDS: single event upset; sram-based fpga; untestability analysis; model checking
    ABSTRACT: This paper presents UA2TPG, a static analysis tool for the untestability proof and automatic test pattern generation for SEUs in the configuration memory of SRAM-based FPGA systems. The tool is based on the model-checking verification technique. An accurate fault model for both logic components and routing structures is adopted. Experimental results show that many circuits have a significant number of untestable faults, and their detection enables more efficient test pattern generation and on-line testing. The tool is mainly intended to support on-line testing of critical components in FPGA fault-tolerant systems

  31. COTS-Based High-Performance Computing for Space Applications
    S. Esposito, C. Albanese, M. Alderighi, F. Casini, L. Giganti, M. Esposti, C. Monteleone, M. Violante
    DOI: 10.1109/TNS.2015.2492824
    KEYWORDS: commercial-off-the-shelf (cots); space applications; software implemented fault tolerance (sift); watchdog timer; watchdog processor; memory protection; memory encoding; fault injection; single event upset (seu); central processing unit (cpu)
    ABSTRACT: Commercial-off-the-shelf devices are often advocated as the only solution to the increasing performance requirements in space applications. This paper presents the solutions developed in the frame of the European Space Agency's HiRel program, concluded in December 2014, in which a number of techniques proposed in the past 10 years have been used to design a highly reliable system that has been selected for forthcoming space missions. This paper presents the system architecture, describes the performed evaluations, and discusses the results

  32. Designing Autonomous Race Car Models for Learning Advanced Topics in Hard Real-Time Systems
    E. Bagalini, M. Violante
    DOI: 10.4018/IJRAT.2015010101
    KEYWORDS: closed-loop control; computer vision; embedded programming; the freescale cup; hard real-time; intelligent vehicle; model identification; smart car; autonomous vehicles
    ABSTRACT: The purpose of this chapter is to illustrate the design challenge that teams of graduate students are facing, and the solutions they devised, to develop autonomous vehicles for the worldwide speed race competition known as The Freescale Cup. The purpose of the competition is to design the fastest car able to complete an unknown racetrack autonomously. The challenge posed by the competition is an ideal teaching ground for engineering students, who can reinforce through working experience the theoretical aspects learned in class in different fields, such as hard real-time system theory, control theory, model-based design theory, and computer vision theory. Besides the technical aspects, as the cars are developed by teams of students, the competition also supports the development of soft skills (such as teamwork and communication skills) that are not normally addressed by classical curricula. The results achieved after three years of participation suggest that this approach is very effective in improving the quality of engineering curricula

  33. Development Flow for On-Line Core Self-Test of Automotive Microcontrollers
    P. Bernardi, R. Cantoro, S. De Luca, E. Sanchez, A. Sansonetti
    DOI: 10.1109/TC.2015.2498546
    KEYWORDS: microprocessors and microcomputers; reliability and testing; software-based self-test
    ABSTRACT: Software-Based Self-Test is an effective methodology for devising the on-line testing of Systems-on-Chip. In the automotive field, a set of test programs to be run during mission mode is also called a Core Self-Test library. This paper introduces several new contributions: (1) it illustrates the issues that need to be taken into account when generating test programs for on-line execution; (2) it proposes an overall development flow, based on the ordered generation of test programs, that minimizes the computational effort; (3) it provides guidelines for allowing the coexistence of the Core Self-Test library with the mission application while guaranteeing execution robustness. The proposed methodology has been applied to a large industrial case study. The coverage level reached after one year of teamwork is over 87% stuck-at fault coverage, and the execution time is compliant with the ISO 26262 specification. Experimental results suggest that alternative approaches may require excessive evaluation time, thus making the generation flow unfeasible for large designs

  34. Layout and Radiation Tolerance Issues in High-Speed Links
    R. Giordano, A. Aloisio, V. Bocci, M. Capodiferro, V. Izzo, L. Sterpone, M. Violante
    DOI: 10.1109/TNS.2015.2498307

  35. On the Functional Test of Branch Prediction Units
    E. Sanchez, M. Sonza Reorda
    DOI: 10.1109/TVLSI.2014.2356612
    KEYWORDS: sbst; functional test; branch history table; branch prediction unit
    ABSTRACT: Branch prediction units (BPUs) are highly efficient modules that can significantly decrease the negative impact of branches in pipelined processors. Traditional test solutions, mainly based on Design for Testability techniques, are often inadequate to tackle specific test constraints, such as those found when incoming inspection or online test is considered. Following a functional approach, based on running a suitable test program and checking the processor behavior, may represent an alternative solution, provided that an effective test algorithm is available for the target unit. In this paper, a functional approach targeting the test of the BPU memory is proposed, which leads to the generation of suitable test programs whose effectiveness is independent of the specific implementation of the BPU. Two very common BPU architectures (branch history table and branch target buffer) are considered. The effectiveness of the approach is validated by resorting to an open-source computer architectural simulator. Experimental results show that the proposed method is able to thoroughly test the BPU memory, making it possible to transform any March algorithm into a corresponding test program; we also provide both theoretical and experimental proofs that the memory and execution time requirements grow linearly with the BPU size

  37. Radiation-induced single event transients modeling and testing on nanometric flash-based technologies
    L. Sterpone, B. Du, S. Azimi
    DOI: 10.1016/j.microrel.2015.07.035
    KEYWORDS: nanometric; fpgas; radiation; heavy ions; fault tolerance; reliability
    ABSTRACT: The continuous scaling of technology nodes makes VLSI devices extremely vulnerable to Single Event Effects (SEEs) induced by highly charged particles such as heavy ions, increasing the sensitivity to Single Event Transients (SETs). In this paper, we describe a new methodology, combining analytical modeling and experimental testing, for analyzing the SET sensitivity of nanometric technologies. The paper includes radiation test experiments performed on Flash-based FPGAs using a heavy-ion radiation beam. Experimental results are detailed and discussed, demonstrating the effective mitigation capabilities enabled by the adoption of the developed model

  38. Using Benchmarks for Radiation Testing of Microprocessors and FPGAs
    H. Quinn, W. Robinson, P. Rech, M. Aguirre, A. Barnard, M. Desogus, L. Entrena, M. Garcia-Valderas, S. Guertin, D. Kaeli, F. Kastensmidt, B. Kiddie, A. Sanchez-Clemente, M. Sonza Reorda, L. Sterpone, M. Wirthlin
    DOI: 10.1109/TNS.2015.2498313
    KEYWORDS: radiation testing; benchmarks; microprocessors; fpgas; fault tolerance
    ABSTRACT: Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks

  40. A Functional Approach for Testing the Reorder Buffer Memory
    S. Di Carlo, M. Gaudesi, E. Sanchez, M. Sonza Reorda
    DOI: 10.1007/s10836-014-5461-9
    KEYWORDS: digital system design test and verification; microprocessor testing; software-based self-test; embedded memory test; on-line test
    ABSTRACT: Superscalar processors may have the ability to execute instructions out-of-order to better exploit the internal hardware and to maximize the performance. To maintain the in-order instruction commitment and to guarantee the correctness of the final results (as well as precise exception management), the Reorder Buffer (ROB) is used. From the architectural point of view, the ROB is a memory array of several thousands of bits that must be tested against hardware faults to ensure a correct behavior of the processor. Since it is deeply embedded within the microprocessor circuitry, the most straightforward approach to test the ROB is through Built-In Self-Test solutions, which are typically adopted by manufacturers for end-of-production test. However, these solutions may not always be usable for test during the operational phase (in-field test), which aims at detecting possible hardware faults arising when the electronic system works in its target environment. In fact, these solutions require the usage of test infrastructures that may not be accessible and/or documented, or simply not usable during the operational phase. This paper proposes an alternative solution, based on a functional approach, in which the test is performed by forcing the processor to execute a specially written test program and checking the behavior of the processor. This approach can be adopted for in-field test, e.g., at power-on, at power-off, or during the time slots unused by the system application. The method has been validated by resorting to both an architectural and a memory fault simulator

  41. A New Hybrid Nonintrusive Error-Detection Technique Using Dual Control-Flow Monitoring
    L. Parra, A. Lindoso, M. Portela-Garcia, L. Entrena, B. Du, M. Sonza Reorda, L. Sterpone
    DOI: 10.1109/TNS.2014.2361953
    KEYWORDS: error detection; fault tolerance; control flow error detection; fault injection

  42. ASSESS: A Simulator of Soft Errors in the Configuration Memory of SRAM-based FPGAs
    C. Bernardeschi, L. Cassano, A. Domenici, L. Sterpone
    DOI: 10.1109/TCAD.2014.2329419
    KEYWORDS: simulations; fpgas; radiation; place and route
    ABSTRACT: In this paper a simulator of soft errors (SEUs) in the configuration memory of SRAM-based FPGAs is presented. The simulator, named ASSESS, adopts fault models for SEUs affecting the configuration bits controlling both logic and routing resources that have been demonstrated to be much more accurate than classical fault models adopted by academic and industrial fault simulators currently available. The simulator permits the propagation of faulty values to be traced in the circuit, thus allowing the analysis of the faulty circuit not only by observing its output, but also by studying fault activation and error propagation. ASSESS has been applied to several designs, including the miniMIPS microprocessor, chosen as a realistic test case to evaluate the capabilities of the simulator. The ASSESS simulations have been validated comparing their results with a fault injection campaign on circuits from the ITC'99 benchmark, resulting in an average error of only 0.1%

  43. Evaluating the radiation sensitivity of GPGPU caches: New algorithms and experimental results
    D. Sabena, M. Sonza Reorda, L. Sterpone, P. Rech, L. Carro
    DOI: 10.1016/j.microrel.2014.05.001
    KEYWORDS: null frames; gpgpu; seu; cache; radiation testing
    ABSTRACT: Given their high computational power, General Purpose Graphics Processing Units (GPGPUs) are increasingly adopted: GPGPUs have begun to be preferred to CPUs for several computationally intensive applications, not necessarily related to computer graphics. However, their sensitivity to radiation still needs to be fully evaluated. In this context, GPGPU data caches and shared memory have a key role, since they increase performance by sharing data between the parallel resources of a GPGPU and by minimizing the memory access overhead. In this paper, we present three new algorithms designed to support radiation experiments aimed at evaluating the radiation sensitivity of GPGPU data caches and shared memory. We also report the cross-section and Failure In Time results from neutron testing experiments performed on a commercial-off-the-shelf GPGPU using the proposed algorithms, with particular emphasis on the shared memory and on the L1 and L2 data caches

  44. Improving Colorwave with the probabilistic approach for reader-to-reader anti-collision TDMA protocols
    R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    DOI: 10.1007/s11276-013-0611-z
    KEYWORDS: rfid; reader-to-reader interference; probabilistic collision resolution; colorwave
    ABSTRACT: In RFID systems, wireless communication among readers and tags is subject to electromagnetic interference. In particular, when several readers work closely, forming a so-called Dense Reader Environment (DRE), reader-to-reader collisions may occur. Several anti-collision protocols have been proposed in the literature to address this issue. Distributed Color Selection (DCS) and Colorwave are two effective state-of-the-art protocols based on Time Division Multiple Access (TDMA). DCS provides great fairness, but it is not adaptable to changes in network topology, penalizing the throughput of the network. Colorwave is an enhanced version of DCS offering more flexibility. Moreover, a general probabilistic approach has been suggested for solving collisions in TDMA protocols and, in particular, it has been applied to DCS. In this work, the probabilistic method is implemented in the collision resolution routine of Colorwave and its effects are analyzed, confirming the validity of this mechanism for TDMA protocols. As shown by the simulation results, the probabilistic approach can be adopted to improve throughput or fairness without adding any further requirement
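The probabilistic collision-resolution mechanism applied here can be sketched as a toy round-based model (a hypothetical illustration, not the actual Colorwave state machine; function and parameter names are assumptions): readers sharing a time slot collide, and each colliding reader switches to a random new slot only with probability p.

```python
import random
from collections import Counter

def resolve_round(colors, num_colors, p, rng):
    """One round of probabilistic TDMA collision resolution.

    colors[i] is the time slot ("color") of reader i; readers sharing
    a color collide, and each colliding reader re-draws its color with
    probability p, otherwise it keeps the current one."""
    counts = Counter(colors)
    return [rng.randrange(num_colors)
            if counts[c] > 1 and rng.random() < p else c
            for c in colors]

rng = random.Random(0)
slots = [3, 3, 5, 7]  # readers 0 and 1 collide on slot 3
slots = resolve_round(slots, 8, 0.5, rng)
```

Iterating `resolve_round` until no color is duplicated models convergence; p trades resolution speed against the needless slot changes that the probabilistic approach is meant to avoid.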

  45. In- and out-degree distributions of nodes and coverage in random sector graphs
    R. Ferrero, M. Bueno-Delgado, F. Gandino
    DOI: 10.1109/TWC.2014.031314.130905
    KEYWORDS: wireless sensor network; directional antenna; optical sensor network; connectivity; topology
    ABSTRACT: In a random sector graph, the presence of an edge between two nodes depends on their distance and spatial orientation. This kind of graph is widely used for modeling wireless sensor networks where communication among nodes is directional. In particular, it is applied to describe both the radio frequency transmission among nodes equipped with directional antennas and the line-of-sight transmission in optical sensor networks. Important properties of a wireless sensor network, such as connectivity and coverage, can be investigated by studying the degree of the nodes of the corresponding random sector graph. In detail, the in-degree value represents the number of incoming edges, whereas the out-degree considers the outgoing edges. This paper mathematically characterizes the average degree of a random sector graph and the probability distributions of the in-degree and out-degree of the nodes. Furthermore, it derives the coverage probability of the network. All the formulas are validated through extensive simulations, showing an excellent match between theoretical results and experimental data
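The model in this abstract is straightforward to sample; the sketch below (parameter names are assumptions) draws node positions and orientations uniformly in the unit square and adds a directed edge u → v whenever v lies within u's sensing sector, so in-degree and out-degree distributions can be studied empirically.

```python
import math
import random

def random_sector_graph(n, radius, beam_width, rng):
    """Sample a random sector graph: each node has a position, an
    orientation, and covers a sector of angle beam_width up to radius."""
    nodes = [(rng.random(), rng.random(), rng.uniform(0, 2 * math.pi))
             for _ in range(n)]
    edges = []
    for i, (xi, yi, heading) in enumerate(nodes):
        for j, (xj, yj, _) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            if math.hypot(dx, dy) > radius:
                continue
            # bearing of node j as seen from node i, wrapped to [-pi, pi)
            bearing = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
            if abs(bearing) <= beam_width / 2:
                edges.append((i, j))  # node i "sees" node j
    return nodes, edges
```

The out-degree of node i is the number of edges (i, _), and the in-degree the number of edges (_, i); averaging over many samples approximates the distributions derived analytically in the paper.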

  46. Increasing the Fault Coverage of Processor Devices during the Operational Phase Functional Test
    M. De Carvalho, P. Bernardi, E. Sanchez, M. Sonza Reorda, O. Ballan
    DOI: 10.1007/s10836-014-5457-5

  47. Key Management for Static Wireless Sensor Networks With Node Adding
    F. Gandino, B. Montrucchio, M. Rebaudengo
    DOI: 10.1109/TII.2013.2288063
    KEYWORDS: random key distribution; transitory master key; wireless sensor network (wsn); key management
    ABSTRACT: Wireless sensor networks offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to achieve secure communication among nodes, many approaches employ symmetric encryption. Several key management schemes have been proposed in order to establish symmetric keys. The paper presents an innovative key management scheme called Random Seed Distribution with Transitory Master Key (RSDTMK), which adopts the random distribution of secret material and a transitory master key used to generate pairwise keys. The proposed approach addresses the main drawbacks of previous approaches based on these techniques. Moreover, it outperforms state-of-the-art protocols by always providing a high security level
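The transitory-master-key idea can be sketched generically (an HMAC-based illustration under assumed names, not the actual RSDTMK protocol): during initialization every pair of nodes derives the same pairwise key from the master key and the two node identifiers, after which each node erases the master key.

```python
import hmac
import hashlib

def pairwise_key(master_key: bytes, id_a: bytes, id_b: bytes) -> bytes:
    """Derive a symmetric pairwise key from a transitory master key.

    Sorting the identifiers makes the derivation order-independent,
    so both endpoints compute the same key without exchanging secrets."""
    lo, hi = sorted((id_a, id_b))
    return hmac.new(master_key, lo + hi, hashlib.sha256).digest()
```

Once the master key is erased at the end of the initialization phase, physically compromising a single node exposes only that node's own pairwise keys, which is the security property transitory-master-key schemes aim for.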

  48. MIHST: A Hardware Technique for Embedded Microprocessor Functional On-line Self-Test
    P. Bernardi, M. Lyl, E. Sanchez, M. Sonza Reorda
    DOI: 10.1109/TC.2013.165

  49. On the Automatic Generation of Optimized Software-Based Self-Test Programs for VLIW Processors
    D. Sabena, M. Sonza Reorda, L. Sterpone
    DOI: 10.1109/TVLSI.2013.2252636
    KEYWORDS: test program generation; vliw; software-based self-test
    ABSTRACT: Very Long Instruction Word (VLIW) processors are increasingly employed in a large range of embedded signal processing applications, mainly due to their ability to provide high performance with reduced clock rate and power consumption. At the same time, there is an increasing demand for efficient and optimal test techniques able to detect permanent faults in VLIW processors. Software-Based Self-Test (SBST) methods are a consolidated and effective solution to detect faults in a processor, both at the end of the production phase and during the operational life; however, when traditional SBST techniques are applied to VLIW processors, they may turn out to be ineffective (especially in terms of size and duration), due to their inability to exploit the parallelism intrinsic in these architectures. In this paper, we present a new method for the automatic generation of efficient test programs specifically oriented to VLIW processors. The method starts from existing test programs based on generic SBST algorithms and automatically generates effective test programs able to reach the same fault coverage, while minimizing the test duration and the test code size. The method consists of four parametric phases and can deal with different VLIW processor models. The main goal of the paper is to show that, in the case of VLIW processors, it is possible to automatically generate an effective test program able to achieve high fault coverage with minimal test time and required resources. Experimental data gathered on a case study demonstrate the effectiveness of the proposed approach: the results show that this method is able to exploit the intrinsic parallelism of the VLIW processor, taming the growth in size and duration of the test program when the processor size grows

  50. Performance analysis of reliable flooding in duty-cycle wireless sensor networks
    L. Zhang, R. Ferrero, E. Sanchez, M. Rebaudengo
    DOI: 10.1002/ett.2556
    ABSTRACT: Wireless sensor network (WSN) is an emerging technology widely applied in modern applications. The resource limitations and the peculiarity of broadcast communication make traditional flooding methods suffer severe performance degradation when directly applied to duty-cycle WSNs, in which each node auto-activates for a brief interval and stays dormant most of the time. In this work, a theoretical performance analysis of acknowledgement (ACK)-based and non-acknowledgement (NoACK)-based transmission mechanisms is presented. The evaluation considers both a point-to-point model and a point-to-multipoint one. Furthermore, the opportunistic flooding algorithm, which considers the effects of both the duty cycle and the unreliable wireless links of a WSN, is implemented and evaluated considering both the ACK-based and NoACK-based transmission mechanisms. A solid framework is proposed in order to optimise the flooding in duty-cycle WSNs according to the network requirements. Extensive simulations show that ACK-based and NoACK-based implementations produce a similar performance on the flooding delay, but with significantly different costs in terms of energy consumption

  51. Recovery Time and Fault Tolerance Improvement for Circuits mapped on SRAM-based FPGAs
    A. Ullah
    DOI: 10.1007/s10836-014-5463-7
    KEYWORDS: triple modular redundancy (tmr); partial and dynamic reconfiguration; single event upset
    ABSTRACT: The rapid adoption of FPGA-based systems in space and avionics demands dependability rules, from the design to the layout phases, to protect against radiation effects. Triple Modular Redundancy (TMR) is a widely used fault tolerance methodology to protect circuits implemented on SRAM-based FPGAs against radiation-induced Single Event Upsets (SEUs). The accumulation of SEUs in the configuration memory can cause the TMR replicas to fail, requiring a periodic write-back of the configuration bit-stream. The system downtime due to scrubbing and the probability of simultaneous failures of two TMR domains both increase with growing device densities. We propose a methodology to reduce the recovery time of TMR circuits with increased resilience to Cross-Domain Errors. Our methodology consists of an automated tool-flow for fine-grain error detection, error flag convergence, and non-overlapping domain placement. The fine-grain error detection logic identifies the faulty domain using gate-level functions, while the error flag convergence logic reduces the overwhelming number of flag signals. The non-overlapping placement enables selective domain reconfiguration and greatly reduces the number of Cross-Domain Errors. Our results demonstrate an evident reduction of the recovery time, due to fast error detection and selective partial reconfiguration of faulty domains. Moreover, the methodology drastically reduces Cross-Domain Errors in Look-Up Tables and routing resources. The improvements in recovery time and fault tolerance are achieved at an area overhead of a single LUT per majority voter in TMR circuits
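The fine-grain detection idea, identifying which TMR domain disagrees so that only that domain is reconfigured, can be modeled in a few lines (a bitwise toy model with assumed names, not the paper's gate-level tool-flow):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant domain outputs."""
    return (a & b) | (a & c) | (b & c)

def faulty_domains(a: int, b: int, c: int) -> list:
    """Flag every domain whose output disagrees with the voted value,
    mimicking fine-grain error detection for selective reconfiguration."""
    voted = tmr_vote(a, b, c)
    return [name for name, out in (("A", a), ("B", b), ("C", c)) if out != voted]
```

Knowing which domain disagreed lets the scrubber partially reconfigure only that domain instead of writing back the whole bitstream, which is what shortens the recovery time.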

  52. Reliability Evaluation of Embedded GPGPUs for Safety Critical Applications
    D. Sabena, L. Sterpone, L. Carro, P. Rech
    DOI: 10.1109/TNS.2014.2363358
    KEYWORDS: single-event effects; radiation testing; gpgpu
    ABSTRACT: Thanks to their capability of efficiently executing massive computations in parallel, General Purpose Graphic Processing Units (GPGPUs) have begun to be preferred to CPUs for several parallel applications in different domains. Recently, GPGPUs have begun to be employed in two particularly relevant fields: High Performance Computing (HPC) and embedded systems. The reliability requirements are different in these two application domains. In order to be employed in safety-critical applications, GPGPUs for embedded systems must be qualified as reliable. In this paper, we analyze through neutron irradiation typical parallel algorithms for embedded GPGPUs and we evaluate their reliability. We analyze how cache and thread distributions affect GPGPU reliability. The data have been acquired through neutron test experiments performed at the VESUVIO neutron facility at ISIS. The obtained experimental results show that, if the L1 cache of the considered GPGPU is disabled, the algorithm execution is the most reliable. Moreover, it is demonstrated that during an FFT execution most errors appear in the stages in which the GPGPU is fully loaded, as the number of instantiated parallel tasks is higher

  53. Software-Based Hardening Strategies for Neutron Sensitive FFT Algorithms on GPUs
    L. Pilla, P. Rech, F. Silvestri, C. Frost, P. Navaux, M. Sonza Reorda, L. Carro
    DOI: 10.1109/TNS.2014.2301768
    ABSTRACT: In this paper we assess the neutron sensitivity of Graphics Processing Units (GPUs) when executing a Fast Fourier Transform (FFT) algorithm, and propose specific software-based hardening strategies to reduce its failure rate. Our research is motivated by experimental results with an unhardened FFT that demonstrate a majority of multiple errors in the output in the case of failures, which are caused by data dependencies. In addition, the use of the built-in error-correction code (ECC) showed a large overhead, and proved to be insufficient to provide high reliability. Experimental results with the hardened algorithm show a two orders of magnitude failure rate improvement over the original algorithm (one order of magnitude over ECC) and an overhead 64% smaller than ECC

  54. The impact of topology on energy consumption for collection tree protocols: An experimental assessment through evolutionary computation
    D. Bucur, G. Iacca, G. Squillero, A. Tonda
    DOI: 10.1016/j.asoc.2013.12.002
    KEYWORDS: collection tree protocol; multihoplqi; wireless sensor networks; evolutionary algorithms; routing protocols; verification; energy consumption
    ABSTRACT: The analysis of worst-case behavior in wireless sensor networks is an extremely difficult task, due to the complex interactions that characterize the dynamics of these systems. In this paper, we present a new methodology for analyzing the performance of routing protocols used in such networks. The approach exploits a stochastic optimization technique, specifically an evolutionary algorithm, to generate a large, yet tractable, set of critical network topologies; such topologies are then used to infer general considerations on the behaviors under analysis. As a case study, we focused on the energy consumption of two well-known ad hoc routing protocols for sensor networks: the multi-hop link quality indicator and the collection tree protocol. The evolutionary algorithm started from a set of randomly generated topologies and iteratively enhanced them, maximizing a measure of "how interesting" such topologies are with respect to the analysis. In the second step, starting from the gathered evidence, we were able to define concrete, protocol-independent topological metrics which correlate well with protocols' poor performance. Finally, we discovered a causal relation between the presence of cycles in a disconnected network and abnormal network traffic. Such creative processes were made possible by the availability of a set of meaningful topology examples. Both the proposed methodology and the specific results presented here - that is, the new topological metrics and the causal explanation - can be fruitfully reused in different contexts, even beyond wireless sensor networks
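The iterative enhancement loop described above can be sketched as a minimal mu-plus-lambda style evolutionary algorithm over edge sets. The fitness function below is only a stand-in for the paper's "how interesting" measure, and all sizes and parameters are illustrative.

```python
import random

def random_topology(n, p=0.3, rng=random):
    """Random undirected topology as a set of edges over n nodes."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def mutate(edges, n, rng=random):
    """Flip one potential edge: add it if absent, drop it if present."""
    i, j = sorted(rng.sample(range(n), 2))
    child = set(edges)
    child.symmetric_difference_update({(i, j)})
    return child

def evolve(fitness, n=10, pop_size=20, generations=50, seed=0):
    """Keep the fitter half of the population, refill it by mutation."""
    rng = random.Random(seed)
    pop = [random_topology(n, rng=rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(rng.choice(parents), n, rng) for _ in parents]
    return max(pop, key=fitness)

# Stand-in fitness: prefer topologies with roughly 9 edges.
best = evolve(lambda e: -abs(len(e) - 9))
```

In the paper the fitness is a protocol-level measure (e.g., simulated energy consumption under the routing protocol), which is what steers the search toward genuinely critical topologies.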

  55. A Geometric Distribution Reader Anti-collision protocol for RFID Dense Reader Environments
    M. Bueno-Delgado, R. Ferrero, F. Gandino, P. Pavon-Marino, M. Rebaudengo
    DOI: 10.1109/TASE.2012.2218101
    KEYWORDS: rfid; reader collision problems; sift geometric distribution function; epcglobal; etsi en 302 208
    ABSTRACT: Dense passive radio frequency identification (RFID) systems are particularly susceptible to reader collision problems, categorized as reader-to-tag and reader-to-reader collisions. Both may degrade the system performance by decreasing the number of identified tags per time unit. Although many proposals have been suggested to avoid or handle these collisions, most of them are not compatible with current standards and regulations, require extra hardware, and do not make an efficient use of the network resources. This paper proposes the Geometric Distribution Reader Anti-collision (GDRA) protocol, a new centralized scheduler that exploits the Sift geometric probability distribution function to minimize reader collision problems. GDRA provides higher throughput than the state-of-the-art proposals for dense reader environments and, unlike the majority of previous works, GDRA is compliant with the EPCglobal standard and the ETSI EN 302 208 regulation, and can be implemented in real RFID systems without extra hardware
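For readers unfamiliar with the Sift function: it assigns geometrically increasing probabilities to later contention slots, so that with high probability a single contender picks the earliest occupied slot alone, regardless of how many contenders there are. A hedged sketch follows; the window size K, the parameter alpha, and the toy contention round are illustrative and heavily simplified with respect to GDRA itself.

```python
import random

def sift_probabilities(K, alpha):
    """Sift geometric distribution over slots 1..K:
    p_r is proportional to alpha**(-r), normalised to sum to 1."""
    norm = (1 - alpha) * alpha**K / (1 - alpha**K)
    return [norm * alpha**(-r) for r in range(1, K + 1)]

def draw_slot(K=32, alpha=0.8):
    """Sample a contention slot according to the Sift distribution."""
    probs = sift_probabilities(K, alpha)
    return random.choices(range(1, K + 1), weights=probs, k=1)[0]

# Toy contention round: each reader draws a slot; readers holding the
# unique earliest slot would transmit without colliding.
readers = {f"reader{i}": draw_slot() for i in range(8)}
earliest = min(readers.values())
winners = [r for r, s in readers.items() if s == earliest]
```

The geometric skew is the key design choice: because early slots are chosen rarely, the expected number of readers picking the winning (earliest occupied) slot stays close to one even as contention grows.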

  56. A Novel Fault Tolerant and Run-Time Reconfigurable Platform for Satellite Payload Processing
    L. Sterpone, M. Porrmann, J. Hagemeyer
    DOI: 10.1109/TC.2013.80

  57. DCNS: an Adaptable High Throughput RFID Reader-to-Reader Anti-collision Protocol
    F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    DOI: 10.1109/TPDS.2012.208
    KEYWORDS: tdma; reader-to-reader collision; rfid
    ABSTRACT: The reader-to-reader collision problem represents a research topic of great recent interest for the radio frequency identification (RFID) technology. Among the state-of-the-art anti-collision protocols, the ones that provide high throughput often have special requirements, such as extra hardware. This study investigates new high throughput solutions for static RFID networks without additional requirements. In this paper, two contributions are presented: a new configuration, called Killer, and a new protocol, called distributed color noncooperative selection (DCNS). The proposed configuration generates selfish behavior, thereby increasing channel utilization and throughput. DCNS fully exploits the Killer configuration and provides new features, such as dynamic priority management, which modifies the performance of the RFID readers when requested. Simulations have been conducted in order to analyze the effects of the proposed innovations. The proposed approach is especially suitable for low-cost applications with a priority not uniformly distributed among readers. The experimental analysis has shown that DCNS provides a greater throughput than the state-of-the-art protocols, even those with additional requirements (e.g., 16 percent better than NFRA)

  58. Evaluation of Single and Additive Interference Models for RFID Collisions
    L. Zhang, F. Gandino, R. Ferrero, M. Rebaudengo
    DOI: 10.1016/j.mcm.2013.01.011
    KEYWORDS: rfid; interference models; reader-to-reader collision
    ABSTRACT: RFID readers for passive tags suffer from reader-to-reader interference. Mathematical models of reader-to-reader interference can be categorized into single interference and additive interference models. Although it considers only the direct collisions between two readers, the single interference model is commonly adopted since it allows faster simulations. However, the additive interference model is more realistic, since it captures the total interference from several readers. In this paper, an analysis of the two models is presented and a comparison between them is conducted under several evaluation scenarios. In addition, the impact of the different parameters, including the path loss exponent, the SIR/SINR threshold and the noise power, is evaluated for the two models
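The distinction between the two models can be made concrete with a toy numeric example (all powers, distances, exponents and thresholds below are invented): several interferers that each pass a pairwise SIR test can still jointly violate the SINR threshold, which is exactly the case the single interference model misses.

```python
def path_loss_power(p_tx, distance, exponent):
    """Received power under a simple path-loss model (illustrative)."""
    return p_tx / distance**exponent

def single_model_ok(p_signal, interferer_powers, sir_threshold):
    """Single interference model: only the strongest interferer counts."""
    worst = max(interferer_powers, default=0.0)
    return worst == 0.0 or p_signal / worst >= sir_threshold

def additive_model_ok(p_signal, interferer_powers, sinr_threshold, noise):
    """Additive interference model: all interferers plus noise add up."""
    total = sum(interferer_powers) + noise
    return p_signal / total >= sinr_threshold

p_signal = 1.0
# Five equidistant interferers; each one alone yields SIR = 25.
weak_interferers = [path_loss_power(1.0, 5.0, 2.0) for _ in range(5)]
single_ok = single_model_ok(p_signal, weak_interferers, sir_threshold=10.0)
additive_ok = additive_model_ok(p_signal, weak_interferers,
                                sinr_threshold=10.0, noise=0.01)
```

Here every pairwise SIR is 25 (above the threshold of 10), yet the summed interference plus noise pushes the SINR to roughly 4.8, so only the additive model reports the collision.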

  59. Fast Power Evaluation for Effective Generation of Test Programs Maximizing Peak Power Consumption
    P. Bernardi, M. De Carvalho, E. Sanchez, M. Sonza Reorda, A. Bosio, L. Dilillo, M. Valka, P. Girard
    DOI: 10.1166/jolpe.2013.1259

  60. IEEE Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems Guest Editorial
    D. Prashant, M. Violante
    DOI: 10.1007/s10836-013-5390-z

  61. Power Consumption Versus Configuration SEUs in Xilinx Virtex-5 FPGAs
    A. Aloisio, V. Bocci, R. Giordano, V. Izzo, L. Sterpone, M. Violante
    DOI: 10.1109/TNS.2013.2273001

  62. SEL-UP: A CAD tool for the sensitivity analysis of radiation-induced Single Event Latch-Up
    L. Sterpone
    DOI: 10.1016/j.microrel.2013.07.104
    ABSTRACT: Space missions require extremely reliable components that must guarantee correct functionality without incurring catastrophic effects. When electronic devices are adopted in space applications, radiation-hardened technology must be adopted. In this paper we propose a novel method for analyzing the sensitivity to Single Event Latch-up (SEL) of radiation-hardened technology. Experimental results, compared with those of a heavy-ion beam campaign, demonstrated the feasibility of the proposed solution

  63. Trade-off between maximum cardinality of collision sets and accuracy of RFID reader-to-reader collision detection
    L. Zhang, F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    DOI: 10.1186/1687-3963-2013-10
    ABSTRACT: As the adoption of the radio-frequency identification (RFID) technology is increasing, many applications require a dense reader deployment. In such environments, reader-to-reader interference becomes a critical problem, so the proposal of effective anti-collision algorithms and their analysis are particularly important. Existing reader-to-reader anti-collision algorithms are typically analyzed using single interference models, which consider only direct collisions. The additive interference models, which consider the sum of interferences, are more accurate but require more computational effort. The goal of this paper is to find the difference in accuracy between single and additive interference models, and how many interference components should be considered in additive models. An in-depth analysis evaluates to which extent the number of additive components in a possible collision affects the accuracy of collision detection. The results of the investigation show that an analysis limited to direct collisions cannot reach a satisfactory accuracy, but the collisions generated by the addition of the interferences from a large number of readers do not significantly affect the detection of RFID reader-to-reader collisions

  64. A benchmark for cooperative coevolution
    A. Tonda, E. Lutton, G. Squillero
    DOI: 10.1007/s12293-012-0095-x
    ABSTRACT: Cooperative co-evolution algorithms (CCEA) are a thriving sub-field of evolutionary computation. This class of algorithms makes it possible to exploit the artificial Darwinist scheme more efficiently, as soon as an optimisation problem can be turned into a co-evolution of interdependent sub-parts of the searched solution. Testing the efficiency of new CCEA concepts, however, is not straightforward: while there is a rich literature of benchmarks for more traditional evolutionary techniques, the same does not hold true for this relatively new paradigm. We present a benchmark problem designed to study the behavior and performance of CCEAs, modeling a search for the optimal placement of a set of lamps inside a room. The relative complexity of the problem can be adjusted by operating on a single parameter. The fitness function is a trade-off between conflicting objectives, so the performance of an algorithm can be examined by making use of different metrics. We show how three different cooperative strategies, Parisian Evolution, Group Evolution and Allopatric Group Evolution, can be applied to the problem. Using a Classical Evolution approach as comparison, we analyse the behavior of each algorithm in detail, with respect to the size of the problem

  65. A fair and high throughput reader-to-reader anticollision protocol in dense RFID networks
    R. Ferrero, F. Gandino, B. Montrucchio, M. Rebaudengo
    DOI: 10.1109/TII.2011.2176742
    KEYWORDS: rfid; reader-to-reader collision; fairness
    ABSTRACT: The supply chain is a typical scenario for exploiting Radio Frequency Identification (RFID) technology. Its growing use in all supply chain areas makes the presence of many close RFID readers more common. In such an environment, interference among readers is critical. Many protocols have been proposed to reduce reader-to-reader collisions. Experimental data showed that the Neighbor Friendly Reader Anticollision (NFRA) protocol [1] maximizes the network throughput. However, it does not take into account the delay between the request and the granting of tag queries, causing delays for some readers. This paper proposes two approaches to increase the fairness and ensure a high throughput for each reader. A theoretical analysis, supported by experimental simulations, demonstrates the improvements achieved

  66. An adaptive low-cost tester architecture supporting embedded memory volume diagnosis
    P. Bernardi, L. Ciganda
    DOI: 10.1109/TIM.2011.2179822
    KEYWORDS: access protocols; adaptive algorithm; automatic test equipment; built-in self-test (bist); fault diagnosis; semiconductor device testing; system on a chip (soc); test data compression
    ABSTRACT: This paper describes the working principle and an implementation of a low-cost tester architecture supporting volume test and diagnosis of built-in self-test (BIST)-assisted embedded memory cores. The described tester architecture autonomously executes a diagnosis-oriented test program, adapting the stimuli at run-time based on the collected test results. In order to effectively allow the tester architecture to interact with the devices under test with an acceptable time overhead, the approach exploits a special hardware module to manage the diagnostic process. Embedded static RAMs equipped with diagnostic BISTs and IEEE 1500 wrappers were selected as a case study; experimental results show the feasibility of the approach when a field-programmable gate array is available on the tester, and its effectiveness in terms of diagnosis time and required tester memory with respect to traditional testers executing diagnosis procedures by means of software running on the host computer

  67. An algorithmic and architectural study on Montgomery exponentiation in RNS
    F. Gandino, F. Lamberti, G. Paravati, J. Bajard, P. Montuschi
    DOI: 10.1109/TC.2012.84
    ABSTRACT: The modular exponentiation on large numbers is computationally intensive. An effective way of performing this operation consists in using Montgomery exponentiation in the Residue Number System (RNS). This paper presents an algorithmic and architectural study of such an exponentiation approach. From the algorithmic point of view, new and state-of-the-art opportunities that come from the reorganization of operations and precomputations are considered. From the architectural perspective, the design opportunities offered by well-known computer arithmetic techniques are studied, with the aim of developing an efficient arithmetic cell architecture. Furthermore, since efficient RNS bases with a low Hamming weight are being considered with ever more interest, four additional cell architectures specifically tailored to these bases are developed, and the trade-off between benefits and drawbacks is carefully explored. An overall comparison among all the considered algorithmic approaches and cell architectures is presented, with the aim of providing the reader with an extensive overview of the Montgomery exponentiation opportunities in RNS
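As background, the sketch below shows plain binary (non-RNS) Montgomery exponentiation, the operation whose RNS reorganization the paper studies. The RNS variant distributes these multiplications over residue channels and is not reproduced here; the word width and names below are illustrative, and the modulus must be odd.

```python
def montgomery_setup(n, width):
    """Precompute R = 2**width > n (n odd, so gcd(R, n) = 1) and
    n' = -n^{-1} mod R, used by the REDC reduction step."""
    R = 1 << width
    n_prime = (-pow(n, -1, R)) % R
    return R, n_prime

def redc(t, n, R, n_prime):
    """Montgomery reduction: t * R^{-1} mod n, without dividing by n.
    The % R and // R steps are a mask and a shift in hardware."""
    m = (t * n_prime) % R
    u = (t + m * n) >> R.bit_length() - 1
    return u - n if u >= n else u

def mont_exp(base, exp, n, width=64):
    """Left-to-right square-and-multiply in the Montgomery domain."""
    R, n_prime = montgomery_setup(n, width)
    x = (base * R) % n             # map base into Montgomery form
    acc = R % n                    # Montgomery form of 1
    while exp:
        if exp & 1:
            acc = redc(acc * x, n, R, n_prime)
        x = redc(x * x, n, R, n_prime)
        exp >>= 1
    return redc(acc, n, R, n_prime)  # map the result back out
```

The point of the construction is that every modular reduction becomes shift-and-mask arithmetic; in the RNS setting the same REDC structure is carried out channel-wise on the residues, which is where the base-extension and precomputation trade-offs studied in the paper arise.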

  68. On the use of embedded debug features for permanent and transient fault resilience in microprocessors
    M. Portela-Garcia, M. Grosso, M. Gallardo-Campos, M. Sonza Reorda, L. Entrena, M. Garcia-Valderas, C. Lopez-Ongil
    DOI: 10.1016/j.micpro.2012.02.013
    KEYWORDS: microprocessor; on-line test; debug infrastructure; error detection
    ABSTRACT: Microprocessor-based systems are employed in an increasing number of applications where dependability is a major constraint. For this reason, detecting faults arising during normal operation while introducing the least possible penalties is a main concern. Different forms of redundancy have been employed to ensure error-free behavior, while error detection mechanisms can be employed where some detection latency is tolerated. However, the high complexity and the low observability of microprocessors' internal resources make the identification of adequate on-line error detection strategies a very challenging task, which can be tackled at circuit or system level. Concerning system-level strategies, a common limitation lies in the mechanism used to monitor program execution and then detect errors as soon as possible, so as to reduce their impact on the application. In this work, an on-line error detection approach based on the reuse of available debugging infrastructures is proposed. The approach can be applied to different system architectures, profiting from the debug trace port available in most current microprocessors to observe possible misbehaviors. Two microprocessors have been used to study the applicability of the solution, LEON3 and ARM7TDMI. Results show that the presented fault detection technique enhances observability, and thus error detection abilities, in microprocessor-based systems without requiring modifications to the core architecture

  69. Software-Based Testing for System Peripherals
    M. Grosso, W. Perez Holguin, E. Sanchez, M. Sonza Reorda, A. Tonda, J. Velasco Medina
    DOI: 10.1007/s10836-012-5287-2
    KEYWORDS: system peripheral; functional testing; dma controller; interrupt controller; sbst
    ABSTRACT: Software-based self-testing strategies have been mainly proposed to tackle microprocessor testing, but may also be applied to peripheral testing. However, testing system peripherals (e.g., DMA controllers, interrupt controllers, and internal counters) is a challenging task, since their observability and controllability are even more reduced when compared to microprocessors and to peripherals devoted to I/O communication (e.g., serial or parallel ports). In this paper an approach to develop functional tests for system peripherals is proposed. The presented methodology requires two correlated phases: module configuration and module operation. The first one prepares the peripheral to work in the different operation modes, whereas the second one is in charge of exciting the whole device and observing its behavior. We propose a methodology that guides the test engineer in building a compact set of test programs able to reach high structural fault coverage levels in a short time. Experimental results demonstrating the method effectiveness for two real-world case studies are finally reported

  70. A Low-Cost Solution for Deploying Processor Cores in Harsh Environments
    M. Sonza Reorda, M. Violante, C. Meinhardt, R. Reis
    DOI: 10.1109/TIE.2011.2134054

  71. A Parallel Tester Architecture for Accelerometer and Gyroscope MEMS Calibration and Test
    L. Ciganda, P. Bernardi, M. Sonza Reorda, D. Barbieri, L. Bonaria, R. Losco, L. Marcigot, M. Straiotto
    DOI: 10.1007/s10836-011-5210-2
    KEYWORDS: mems testing-calibration; accelerometer; gyroscope; automatic test system; ats; adaptive fpga-based ate architecture
    ABSTRACT: This paper describes a tester architecture for the test and calibration of Accelerometer and Gyroscope Micro-ElectroMechanical System (MEMS) devices, allowing an increased parallelism rate and improved process accuracy. The proposed tester architecture tackles some critical issues related to MEMS testing, such as mitigating mechanical concerns that potentially impact the equipment Mean Time Between Maintenance and guaranteeing a sufficient number of measurements in the time unit. The proposed strategy consists of an innovative and low-cost tester resource partitioning that overcomes current limitations to multisite Accelerometer and Gyroscope MEMS testing. A tester prototype was implemented exploiting FPGAs; the feasibility and effectiveness of the proposed methodology were demonstrated on commercial accelerometer and gyroscope MEMS devices

  72. An Analytical Model of the Propagation Induced Pulse Broadening (PIPB) Effects on Single Event Transient in Flash-based FPGAs
    L. Sterpone, N. Battezzati, F. Lima Kastensmidt, R. Chipana
    DOI: 10.1109/TNS.2011.2161886
    KEYWORDS: fpga; single event transient (set); fault injection; processor; static analysis

  73. Application-oriented SEU cross-section of a processor soft core for Atmel RHBD FPGAs
    N. Battezzati, F. Margaglia, M. Violante, F. Decuzzi, D. Merodio Codinachs, B. Bancelin
    DOI: 10.1109/TNS.2010.2103326

  74. Artificial evolution in computer aided design: from the optimization of parameters to the creation of assembly programs
    G. Squillero
    COMPUTING, 2011
    DOI: 10.1007/s00607-011-0157-9
    KEYWORDS: evolutionary computation; microprocessors; post-silicon verification; speed paths
    ABSTRACT: Evolutionary computation has seen limited but steady use in the CAD community during the past 20 years. Nowadays, due to their overwhelming complexity, significant steps in the validation of microprocessors must be performed on silicon, i.e., running experiments on physical devices after tape-out. This scenario has created new space for innovative heuristics. This paper shows a methodology based on an evolutionary algorithm that can be used to devise assembly programs suitable for a range of on-silicon activities. The paper describes how to take into account complex hardware characteristics and architectural details. The experimental evaluation performed on two high-end Intel microprocessors demonstrates the potential of this line of research

  75. Coping With the Obsolescence of Safety- or Mission-Critical Embedded Systems Using FPGAs
    H. Guzman-Miranda, L. Sterpone, M. Violante, M. Aguirre, M. Gutierrez-Rizo
    DOI: 10.1109/TIE.2010.2050291

  76. Functional Verification of DMA Controllers
    M. Grosso, H. Perez, D. Ravotto, E. Sanchez, M. Sonza Reorda, A. Tonda, J. Velasco Medina
    DOI: 10.1007/s10836-011-5219-6
    KEYWORDS: functional verification; peripheral; simulation; dma controller

  77. Increasing pattern recognition accuracy for chemical sensing by evolutionary based drift compensation
    S. Di Carlo, M. Falasconi, E. Sanchez, A. Scionti, G. Squillero, A. Tonda
    DOI: 10.1016/j.patrec.2011.05.019
    KEYWORDS: electronic nose; chemical sensors; bioinformatics; classification systems; evolutionary strategy; sensor drift
    ABSTRACT: Artificial olfaction systems, which mimic human olfaction by using arrays of gas chemical sensors combined with pattern recognition methods, represent a potentially low-cost tool in many areas of industry such as perfumery, food and drink production, clinical diagnosis, health and safety, environmental monitoring and process control. However, successful applications of these systems are still largely limited to specialized laboratories. Sensor drift, i.e., the lack of a sensor's stability over time, still limits real industrial setups. This paper presents and discusses an evolutionary-based adaptive drift-correction method designed to work with state-of-the-art classification systems. The proposed approach exploits a cutting-edge evolutionary strategy to iteratively tweak the coefficients of a linear transformation which can transparently correct raw sensor measurements, thus mitigating the negative effects of the drift. The method learns the optimal correction strategy without the use of models or other hypotheses on the behavior of the physical chemical sensors
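The correction scheme described above, an evolutionary strategy that tweaks the coefficients of a linear transformation, can be illustrated with a minimal (1+1)-ES on synthetic two-sensor data. The drift model, the loss function and all hyperparameters below are invented for the sketch and are not the paper's.

```python
import random

def apply_transform(coeffs, sample):
    """Linear correction: corrected[i] = sum_j coeffs[i][j] * sample[j]."""
    return [sum(c * s for c, s in zip(row, sample)) for row in coeffs]

def loss(coeffs, drifted, reference):
    """Mean squared error between corrected and reference measurements."""
    err = 0.0
    for d, r in zip(drifted, reference):
        err += sum((c - t) ** 2 for c, t in zip(apply_transform(coeffs, d), r))
    return err / len(drifted)

def evolve_correction(drifted, reference, steps=2000, sigma=0.05, seed=1):
    """(1+1)-ES: perturb one coefficient at a time, keep only improvements."""
    rng = random.Random(seed)
    dim = len(drifted[0])
    coeffs = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    best = loss(coeffs, drifted, reference)
    for _ in range(steps):
        i, j = rng.randrange(dim), rng.randrange(dim)
        old = coeffs[i][j]
        coeffs[i][j] += rng.gauss(0.0, sigma)
        trial = loss(coeffs, drifted, reference)
        if trial < best:
            best = trial
        else:
            coeffs[i][j] = old
    return coeffs, best

# Toy drift on two sensors: a gain error plus cross-talk.
reference = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.5]]
drifted = [[1.1 * a + 0.2 * b, 0.9 * b] for a, b in reference]
coeffs, final_loss = evolve_correction(drifted, reference)
```

Because the correction is applied in front of the classifier, the classifier itself never needs retraining, which is the transparency property the abstract emphasizes.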

  78. Layout-Aware Multi-Cell Upsets Effects Analysis on TMR Circuits Implemented on SRAM-Based FPGAs
    L. Sterpone, M. Violante, A. Panariti, A. Bocquillo, F. Miller, N. Buard, A. Manuzzato, S. Gerardin, A. Paccagnella
    DOI: 10.1109/TNS.2011.2161887

  79. Probabilistic DCS: An RFID reader-to-reader anti-collision protocol
    F. Gandino, R. Ferrero, B. Montrucchio, M. Rebaudengo
    DOI: 10.1016/j.jnca.2010.04.007
    KEYWORDS: rfid; reader-to-reader collision
    ABSTRACT: The wide adoption of radio frequency identification (RFID) for applications requiring a large number of tags and readers makes the reader-to-reader collision problem critical. Various anti-collision protocols have been proposed, but the majority require considerable additional resources and costs. Distributed color system (DCS) is a state-of-the-art protocol based on time division, without noteworthy additional requirements. This paper presents the probabilistic DCS (PDCS) reader-to-reader anti-collision protocol, which employs probabilistic collision resolution. Unlike previous time division protocols, PDCS allows multichannel transmissions, in accordance with international RFID regulations. A theoretical analysis is provided in order to clearly identify the behavior of the additional parameter representing the probability. The proposed protocol maintains the features of DCS while achieving higher efficiency. The theoretical analysis demonstrates that the number of reader-to-reader collisions after a slot change is decreased by over 30%. The simulation analysis validates the theoretical results, and shows that PDCS reaches better performance than state-of-the-art reader-to-reader anti-collision protocols
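The probabilistic color change at the heart of PDCS can be sketched in a few lines. The round below is deliberately simplified (it omits DCS details such as the kick mechanism and multichannel slots), and the probability p, the colors and the neighbor graph are illustrative; with p = 1 the rule degenerates to an always-change behavior like plain DCS.

```python
import random

def pdcs_round(colors, neighbors, max_colors, p):
    """One simplified PDCS iteration: a reader whose color clashes with a
    neighbour's picks a new random color with probability p."""
    new_colors = dict(colors)
    for reader, color in colors.items():
        collided = any(colors[nb] == color for nb in neighbors[reader])
        if collided and random.random() < p:
            new_colors[reader] = random.randrange(max_colors)
    return new_colors
```

The tunable p is the protocol's extra parameter: lowering it damps the cascade of color changes (and hence of new collisions) that a deterministic change rule can trigger.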

  80. A Framework for Automated Detection of Power-Related Software Errors in Industrial Verification Processes
    S. Gandini, W. Ruzzarin, E. Sanchez, G. Squillero, A. Tonda
    DOI: 10.1007/s10836-010-5184-5
    KEYWORDS: diagnostics; evolutionary algorithms; mobile phones; power consumption; software testing; testing tools
    ABSTRACT: The complexity of cell phones is continually increasing, with regard to both their hardware and software parts. As in many complex devices, their components are usually designed and verified separately by specialized teams of engineers and programmers. However, even if each isolated part works flawlessly, it often happens that bugs in one software application arise from the interaction with other modules. Such software misbehaviors become particularly critical when they affect the residual battery life, causing power dissipation. An automatic approach to detect power-affecting software defects is proposed. The approach is intended to be part of a qualifying verification plan and to complement human expertise. Motorola, always at the forefront of researching innovations in the product development chain, experimented with the approach on a mobile phone prototype during a partnership with Politecnico di Torino. Software errors unrevealed by all human-designed tests were detected by the proposed framework, two out of three critical from the power consumption point of view, thus enabling Motorola to further improve its verification plans. Details of the tests and experimental results are presented

  81. A Hybrid Approach for Detection and Correction of Transient Faults in SoCs
    P. Bernardi, M. Grosso, L. Bolzani Poehls, M. Sonza Reorda
    DOI: 10.1109/TDSC.2010.33
    ABSTRACT: Critical applications based on Systems-on-Chip (SoCs) require suitable techniques that are able to ensure a sufficient level of reliability. Several techniques have been proposed to improve the detection and correction capabilities for faults affecting SoCs. This paper proposes a hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches. The proposed solution combines some software modifications, which are easy to automate, with the introduction of a hardware module, which is independent of the specific application. The method is particularly suitable to fit in a typical SoC design flow and is shown to achieve a better trade-off between the achieved results and the required costs than corresponding purely hardware or software techniques. In fact, the proposed approach offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead as a purely hardware-based one

  82. A new Timing Driven Placement Algorithm for Dependable Circuits on SRAM-based FPGAs
    L. Sterpone
    DOI: 10.1145/1857927.1857934
    KEYWORDS: hardware; arithmetic and logic structures; reliability; testing and fault tolerance

  83. Analysis of SET propagation in Flash-based FPGAs by means of electrical pulse injection
    L. Sterpone, N. Battezzati, V. Ferlet-Cavrois
    DOI: 10.1109/TNS.2010.2043686
    KEYWORDS: characterization; flash-based fpgas; single event transients (sets)
    ABSTRACT: Advanced digital circuits are increasingly sensitive to single event transient (SET) phenomena. Technology scaling has resulted in a greater sensitivity to single event effects (SEEs), and in particular to SET propagation, since transients may be generated and propagated through the circuit logic, leading to behavioral errors in the affected circuit. When circuits are implemented on Flash-based FPGAs, SETs generated in the combinational logic resources are the main source of critical behavior. In this paper, we developed a technique based on electrical pulse injection for the analysis of SET propagation within the logic resources of Flash-based FPGAs. We outline a logic schematic that allows the injection of different SET pulses, and we performed several experimental analyses: we characterized the basic logic gates used by circuits implemented on Flash-based FPGAs, evaluating the effect on logic chains of realistic lengths, and we evaluated SET propagation through microprocessor logic paths. Results demonstrated the possibility of mitigating SET-broadening effects by acting on physical place and route constraints

  84. Boosting Software Fault Injection for Dependability Analysis of Real-Time Embedded Applications
    G. Cabodi, M. Murciano, M. Violante
    DOI: 10.1145/1880050.1880060
    ABSTRACT: The design of complex embedded systems deployed in safety-critical or mission-critical applications mandates the availability of methods to validate the system dependability across the whole design flow. In this article we introduce a fault injection approach, based on loadable kernel modules and running under the Linux operating system, which can be adopted as soon as a running prototype of the system is available. Moreover, for the purpose of decoupling dependability analysis from hardware availability, we also propose the adoption of hardware virtualization. Extensive experimental results show that statistical analyses made on top of virtual prototypes are in good agreement with the information disclosed by the fault detection trends of real platforms, even under real-time constraints

  85. Design Validation of Multithreaded Processors using Threads Evolution
    D. Ravotto, E. Sanchez, M. Sonza Reorda, G. Squillero

  86. Evaluating the Impact of DFM Library Optimizations on Alpha-induced SEU Sensitivity in a Microprocessor Core
    D. Appello, M. Grosso, D. Loparco, F. Melchiori, A. Paccagnella, P. Rech, M. Sonza Reorda
    DOI: 10.1109/TNS.2010.2049119
    ABSTRACT: This paper presents and discusses the results of Alpha Single Event Upset (SEU) tests on an embedded 8051 microprocessor core implemented using three different standard cell libraries. Each library is based on a different Design for Manufacturability (DfM) optimization strategy; our goal is to understand how these strategies may affect the device sensitivity to alpha-induced Soft Errors. The three implementations are tested resorting to advanced Design for Testability (DfT) methodologies and radiation experiments results are compared. Electrical simulations of flip-flops are finally performed to propose physical motivations to the observed phenomena

  87. Exploiting an infrastructure-intellectual property for systems-on-chip test, diagnosis and silicon debug
    P. Bernardi, M. Grosso, M. Rebaudengo, M. Sonza Reorda
    DOI: 10.1049/iet-cdt.2008.0122

  88. Microprocessor Software-Based Self-Testing
    M. Psarakis, D. Gizopoulos, E. Sanchez, M. Sonza Reorda
    DOI: 10.1109/MDT.2010.5
    KEYWORDS: vlsi integrated circuit testing
    ABSTRACT: This article discusses the potential role of software-based self-testing in the microprocessor test and validation process, as well as its supplementary role in other classic functional- and structural-test methods. In addition, the article proposes a taxonomy for different SBST methodologies according to their test program development philosophy, and summarizes research approaches based on SBST techniques for optimizing other key aspects

  89. Microvesicles Derived from Adult Human Bone Marrow and Tissue Specific Mesenchymal Stem Cells Shuttle Selected Pattern of miRNAs
    F. Collino, M. Deregibus, S. Bruno, L. Sterpone, G. Aghemo, L. Viltono, C. Tetta, G. Camussi
    PLOS ONE, 2010
    DOI: 10.1371/journal.pone.0011803

  90. Tampering in RFID: A Survey on Risks and Defenses
    F. Gandino, B. Montrucchio, M. Rebaudengo
    DOI: 10.1007/s11036-009-0209-y
    ABSTRACT: RFID is a well-known pervasive technology, which provides promising opportunities for the implementation of new services and for the improvement of traditional ones. However, pervasive environments require strong efforts on all the aspects of information security. Notably, RFID passive tags are exposed to attacks, since strict limitations affect the security techniques for this technology. A critical threat for RFID-based information systems is represented by data tampering, which corresponds to the malicious alteration of data recorded in the tag memory. The aim of this paper is to describe the characteristics and the effects of data tampering in RFID-based information systems, and to survey the approaches proposed by the research community to protect against it. The most important recent studies on privacy and security for RFID-based systems are examined, and the protection given against tampering is evaluated. This paper provides readers with an exhaustive overview of risks and defenses against data tampering, highlighting RFID weak spots and open issues