1. A New Solution to On-Line Detection of Control Flow Errors
    B. Du, M. Sonza Reorda, L. Sterpone, L. Parra, M. Portela-Garcia, A. Lindoso, L. Entrena
    Proceedings of the 2014 IEEE 20th International On-Line Testing Symposium (IOLTS)
    KEYWORDS: on-line test; debug interface; control flow checking
    ABSTRACT: Transient faults can affect the behavior of electronic systems and represent a major issue in many safety-critical applications. This paper focuses on Control Flow Errors (CFEs) and extends a previously proposed method based on the usage of the debug interface existing in several processors/controllers. The new method achieves good detection capability with very limited impact on the system development flow and reduced hardware cost; moreover, the proposed technique does not involve any change either in the processor hardware or in the application software, and works even if the processor uses caches. Experimental results are reported, showing both the advantages and the costs of the method.

  2. A comparison of graphics processor architectures for RFID simulation
    R. Ferrero, B. Montrucchio, L. David, K. Ebrahim, L. Graglia, Giovanni Di Dio Iovino, M. Ribero
    DOI: 10.1109/NBiS.2014.37
    KEYWORDS: pervasive computing; parallel computing; nvidia cuda; opencl; gpgpu
    ABSTRACT: Graphics Processing Units (GPUs) have a huge number of cores to speed up graphical computations, and they are being used in a wide range of general-purpose applications that require high performance. In this paper, GPU computing is exploited to model the signal propagation and the interference in large RFID systems, which are a promising solution for achieving pervasive computing since they offer automatic object identification. The speedup of the parallel algorithm is evaluated with respect to a sequential version. Two popular frameworks for general-purpose computing on GPU are considered in the comparison, i.e., CUDA and OpenCL, and distinct implementations are provided for them, highlighting their differences in code optimization and performance.

  3. A parallel fuzzy scheme to improve power consumption management in Wireless Sensor Networks
    M. Collotta, G. Scatà, S. Tirrito, R. Ferrero, M. Rebaudengo
    DOI: 10.1109/ETFA.2014.7005363
    ABSTRACT: Wireless Sensor Networks (WSNs) are increasingly used in different application fields thanks to several advantages such as cost-effectiveness, scalability, flexibility and self-organization. A hot research topic concerns the study of algorithms and mechanisms for reducing the power consumption of the nodes in order to maximize their lifetime. To this end, this paper proposes an approach based on two fuzzy controllers that determine the sleeping time and the transmission power. Simulation results reveal that the device lifetime is increased by 30% with respect to the use of fixed sleeping time and transmission power, and by 25% with respect to a state-of-the-art work that adjusts only the sleeping time.

  4. An effective approach to automatic functional processor test generation for small-delay faults
    R. Andreas, C. Lyl, S. Matthias, P. Bernardi, M. Sonza Reorda, B. Bernd
    Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014
    DOI: 10.7873/DATE2014.140
    ABSTRACT: Functional microprocessor test methods provide several advantages compared to DFT approaches, like reduced chip cost and at-speed execution. However, the automatic generation of functional test patterns is an open issue. In this work we present an approach for the automatic generation of functional microprocessor test sequences for small-delay faults based on Bounded Model Checking. We utilize an ATPG framework for small-delay faults in sequential, non-scan circuits and propose a method for constraining the input space for generating functional test sequences (i.e., test programs). We verify our approach by evaluating the miniMIPS microprocessor. In our experiments we were able to reach over 97% fault efficiency. To the best of our knowledge, this is the first fully automated approach to functional microprocessor test for small-delay faults.

  5. Analysis and mitigation of single event effects on flash-based FPGAs
    L. Sterpone, B. Du
    DOI: 10.1109/ETS.2014.6847804
    KEYWORDS: flash-based fpgas; place and route; single event effects; single event transients; single event upsets; static analysis
    ABSTRACT: In the present paper, we propose a new design flow for the analysis and the implementation of circuits on Flash-based FPGAs hardened against Single Event Effects (SEEs). The solution we developed is based on two phases: 1) an analyzer algorithm able to evaluate the propagation of SETs through logic gates; 2) a hardening algorithm able to place and route a circuit by means of optimal electrical filtering and selective guard-gate insertion. The effectiveness of the proposed design flow has been evaluated by hardening seven benchmark circuits and comparing the results of different implementation approaches on a 130nm Flash-based technology. The obtained results have been validated against radiation-beam testing using heavy ions, and demonstrate that our solution is able to decrease the circuits' sensitivity to SEEs by two orders of magnitude, with a reduction of resource overhead of 83% with respect to traditional mitigation approaches.

  6. Diagnostic Test Generation for Statistical Bug Localization using Evolutionary Computation
    M. Gaudesi, M. Jenihhin, J. Raik, E. Sanchez, G. Squillero, V. Tihhomirov, R. Ubar
    Lecture Notes in Computer Science
    DOI: 10.1007/978-3-662-45523-4_35
    KEYWORDS: evolutionary computation; design error localization; automatic test pattern generation; diagnostics
    ABSTRACT: Verification is increasingly becoming a bottleneck in the process of designing electronic circuits. While there exist several verification tools that assist in detecting occurrences of design errors, or bugs, there is a lack of solutions for accurately pinpointing the root causes of these errors. Statistical bug localization has proven to be an approach that scales up to large designs and is widely utilized in debugging both hardware and software. However, the accuracy of localization is highly dependent on the quality of the stimuli. In this paper we formulate diagnostic test set generation as a task for an evolutionary algorithm, and propose dedicated fitness functions that closely correlate with bug localization capabilities. We perform experiments on the register-transfer level design of the Plasma microprocessor, coupling an evolutionary test-pattern generator and a simulator for fitness evaluation. As a result, the diagnostic resolution of the tests is significantly improved.

  7. Early Reliability Evaluation of a Biomedical System
    H. Hakobyan, P. Rech, M. Sonza Reorda, M. Violante
    2014 9th International Design & Test Symposium
    ABSTRACT: Early reliability evaluation for safety-critical applications is crucial, since it may allow designers to spot critical parts of the design and to introduce suitable countermeasures. In some domains it is common to adopt a design flow exploiting a high-level description of the system behavior and architecture; out of this description, suitable tools then automatically generate the software (and possibly the hardware) needed to perform the required tasks. This paper describes an enhanced version of such a design flow in which reliability is also considered and evaluated. The model of a pacemaker is developed and used for early estimation of its robustness with respect to a subset of the possible faults. The paper highlights why it is important to take into account the environment the target system is designed to interact with (in this case the heart), thus making it possible to identify the most critical faults, based on the severity of their effects.

  8. Effective emulation of permanent faults in ASICs through dynamically reconfigurable FPGAs
    E. Sanchez, L. Sterpone, U. Anees
    Proceedings of 24th International Conference on Field Programmable Logic and Applications (FPL), 2014
    DOI: 10.1109/FPL.2014.6927478

  9. Fault Injection in GPGPU Cores to Validate and Debug Robust Parallel Applications
    M. De Carvalho, D. Sabena, M. Sonza Reorda, L. Sterpone, P. Rech, L. Carro
    Proceedings of IEEE 20th International On-Line Testing Symposium (IOLTS)
    KEYWORDS: fault injection; reliable gpgpu applications; robust algorithms; transient faults
    ABSTRACT: General Purpose Graphic Processing Units (GPGPUs) are more efficient than CPUs for processing parallel data. Unfortunately, GPGPUs are sensitive to radiation. Hence, several software mitigation techniques, as well as robust algorithms, are being developed to overcome reliability problems. In this paper we propose a software debugger-based fault injection mechanism to evaluate the resiliency of applications running on a GPGPU and to validate the software hardening techniques it possibly embeds. We report some experimental results gathered on selected case studies to show the advantages and limitations of the proposed approach.

  10. Fault Injection in the Process Descriptor of a Unix-based Operating System
    B. Montrucchio, M. Rebaudengo, A.D. Velasco
    Proceedings of the 28th Defect and Fault Tolerance in VLSI and Nanotechnology Systems Symposium
    KEYWORDS: operating system; fault injection
    ABSTRACT: Transient faults, originating from several sources such as high-energy particles, are a major issue in computer-based systems for which high availability is a strict requirement. Fault injection is a commonly used method to evaluate the sensitivity of such systems. The paper presents an evaluation of the effects of faults in the memory containing the process descriptor of a Unix-based Operating System. In particular, the state field has been taken into consideration as the main target, changing the current state value into another one that could be valid or invalid. An experimental analysis has been conducted on a large set of different tasks belonging to the operating system itself. Test results show that the state field in the process descriptor represents a critical variable as far as dependability is concerned.

  11. Fault injection and fault tolerance methodologies for assessing device robustness and mitigating against ionizing radiation
    A. Dan, L. Sterpone, C. Lopez-Ongil
    Proceedings of the 19th IEEE European Test Symposium
    DOI: 10.1109/ETS.2014.6847812
    KEYWORDS: fpgas; reliability; heavy ions; test facility
    ABSTRACT: Traditionally, heavy-ion radiation effects on digital systems working in safety-critical applications have been of huge interest. Nowadays, due to the shrinking technology process, Integrated Circuits have become sensitive also to other kinds of radiation particles, such as neutrons, which exist at the Earth's surface and affect ground-level safety-critical applications such as automotive or medical systems. The process of analyzing and hardening digital devices against soft errors raises the final cost, due to time-expensive fault injection campaigns and radiation tests, and reduces system performance, due to the insertion of redundancy-based mitigation solutions. The main industrial problem is the localization of the critical elements in the circuit, in order to apply optimal mitigation techniques. The purpose of this tutorial is to present and discuss different solutions currently available for assessing and implementing the fault tolerance of digital circuits, not only when the full design description is provided but also at the component level, especially when Commercial-off-the-shelf (COTS) devices are selected.

  12. GPGPUs: How to combine high computational power with high reliability
    L. Bautista Gomez, F. Cappello, L. Carro, N. Debardeleben, B. Fang, S. Gurumurthi, K. Pattabiraman, P. Rech, M. Sonza Reorda
    Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014
    DOI: 10.7873/DATE2014.354
    ABSTRACT: GPGPUs are increasingly used in several domains, from gaming to different kinds of computationally intensive applications. In many applications GPGPU reliability is becoming a serious issue, and several research activities are focusing on its evaluation. This paper offers an overview of some major results in the area. First, it shows and analyzes the results of some experiments assessing GPGPU reliability in HPC datacenters. Second, it provides some recent results derived from radiation experiments about the reliability of GPGPUs. Third, it describes the characteristics of an advanced fault-injection environment allowing effective evaluation of the resiliency of applications running on GPGPUs.

  13. High Quality System Level Test and Diagnosis
    A. Jutman, M. Sonza Reorda, Hans-Joachim Wunderlich
    2014 IEEE 23rd Asian Test Symposium
    DOI: 10.1109/ATS.2014.62
    ABSTRACT: This survey introduces the common practices, current challenges and advanced techniques of high-quality system-level test and diagnosis. Specialized techniques and industrial standards for testing complex boards are introduced. The reuse, for system test, of design-for-test structures and test data developed at chip level is discussed, including its limitations and research challenges. Structural test methods have to be complemented by functional test methods: state-of-the-art and leading-edge research on functional testing is also covered.

  14. In-field testing of SoC devices: Which solutions by which players?
    A. Jacob, G. Xinli, T. Maclaurin, J. Rajski, G. Paul, D. Gizopoulos, M. Sonza Reorda
    2014 IEEE 32nd VLSI Test Symposium (VTS)
    DOI: 10.1109/VTS.2014.6818780
    ABSTRACT: In-field testing of SoC devices is increasingly important to face the dependability requirements of several application domains. Different solutions can be devised and adopted. We summarize the main solutions currently adopted by industry, identify the most critical open issues, and discuss important future trends.

  15. Layout and radiation tolerance issues in high-speed links for TDAQ systems
    V. Bocci, M. Capodiferro, R. Giordano, V. Izzo
    Real Time Conference (RT), 2014 19th IEEE-NPSS
    DOI: 10.1109/RTC.2014.7097548

  16. On the Functional Test of the Register Forwarding and Pipeline Interlocking Unit in Pipelined Processors
    P. Bernardi, R. Cantoro, L. Ciganda, B. Du, E. Sanchez, M. Sonza Reorda, M. Grosso, O. Ballan
    Proceedings of the 14th International Workshop on Microprocessor Test and Verification (MTV), 2013
    DOI: 10.1109/MTV.2013.10
    KEYWORDS: microprocessor testing; functional testing
    ABSTRACT: When the result of a previous instruction is needed in the pipeline before it is available, a "data hazard" occurs. Register Forwarding and Pipeline Interlock (RF&PI) are mechanisms suitable to avoid data corruption and to limit the performance penalty caused by data hazards in pipelined microprocessors. Data hazard handling is part of the microprocessor control logic; its test can hardly be achieved with a functional approach, unless a specific test algorithm is adopted. In this paper we analyze the causes of the low functional testability of the RF&PI logic and propose some techniques able to effectively perform its test. In particular, we describe a strategy to perform Software-Based Self-Test (SBST) on the RF&PI unit. The general structure of the unit is analyzed, a suitable test algorithm is proposed, and the strategy to observe the test responses is explained. The method can be exploited for test both at the end of manufacturing and in the operational phase. The feasibility and effectiveness of the proposed approach are demonstrated on both an academic MIPS-like processor and an industrial System-on-Chip based on the Power Architecture™.

  17. On the In-Field Test of Branch Prediction Units using the Correlated Predictor mechanism
    M. Gaudesi, S. Saleem, E. Sanchez, M. Sonza Reorda, E. Tanowe
    KEYWORDS: sbst; branch history table; branch prediction unit
    ABSTRACT: Branch Prediction Units (BPUs) are widely used to reduce the performance penalties caused by branch instructions in pipelined processors. BPUs may be implemented in different forms: the Branch History Table (BHT) is an effective solution when the goal is predicting the result of conditional branches. In this paper we propose a method to generate test programs able to detect faults affecting the memory within a BHT implementing the correlated-predictor approach. Our method is particularly suited to the in-field test of a processor and allows detecting any stuck-at fault in the BPU memory. The method does not require detailed knowledge of the BPU implementation, but relies only on the key parameters of its architecture. We gathered experimental results using the SimpleScalar environment.

  18. On the in-Field Functional Testing of Decode Units in Pipelined RISC Processors
    P. Bernardi, R. Cantoro, L. Ciganda, E. Sanchez, M. Sonza Reorda, S. De Luca, R. Meregalli, A. Sansonetti
    Proceedings of the 2014 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT)
    KEYWORDS: software-based self-test; decode unit; instruction coverage; fault grading
    ABSTRACT: The paper deals with the in-field test of the decode unit of RISC processors through functional test programs following the SBST approach. The paper details a strategy based on instruction classification and manipulation, and on signature collection. The method does not require detailed implementation information (e.g., the netlist), but is based on the Instruction Set of the processor. The proposed method is evaluated on an industrial SoC device, which includes a PowerPC-derived processor. Results demonstrate the efficiency and effectiveness of the strategy; the proposed solution reaches over 90% stuck-at fault coverage, while an instruction-coverage-based approach does not exceed 70%.

  19. Permanent faults on LIN networks: On-line test generation
    A. Vaskova, M. Portela-Garcia, M. Garcia-Valderas, C. Lopez-Ongil, M. Sonza Reorda
    2014 IEEE 20th International On-Line Testing Symposium (IOLTS)
    DOI: 10.1109/IOLTS.2014.6873665

  20. Reconfigurable High Performance Architectures: How much are they ready for safety-critical applications?
    D. Sabena, L. Sterpone, M. Schölzel, T. Koal, H. Vierhaus, S. Wong, R. Glein, F. Rittner, C. Stender, M. Porrmann, J. Hagemeyer
    Proceedings of 19th IEEE European Test Symposium (ETS)
    KEYWORDS: reconfigurable systems; vliw processor
    ABSTRACT: Reconfigurable architectures are increasingly employed in a large range of embedded applications, mainly due to their ability to provide high performance and high flexibility, combined with the possibility of being tuned according to the specific task they address. Reconfigurable systems are today used in several application areas, and are also suitable for systems employed in safety-critical environments. The current development trend in this area focuses on using the reconfigurable features to improve the fault tolerance and the self-test and self-repair capabilities of the considered systems. The state of the art of reconfigurable systems is today represented by Very Long Instruction Word (VLIW) processors and by systems based on partially reconfigurable SRAM-based FPGAs. In this paper, we present an overview and an accurate analysis of these two types of reconfigurable systems. The paper focuses on analyzing design features, as well as fail-safe and reconfigurable features oriented to self-adaptive mitigation and redundancy approaches applied during the design phase. Experimental results reporting a clear status of the test data and fault-tolerance robustness are detailed and commented upon.

  21. Reducing SEU Sensitivity in LIN Networks: Selective and Collaborative Hardening Techniques
    A. Vaskova, A. Fabregat, M. Portela-García, M. García-Valderas, C. López-Ongil, M. Sonza Reorda
    2014 15th Latin American Test Workshop - LATW
    DOI: 10.1109/LATW.2014.6841924
    ABSTRACT: Digital electronic systems in automotive applications are in charge of different tasks, ranging from very critical control functions (e.g., airbag, ABS, ESP) to comfort services (e.g., handling of mirrors, seats, windows, wipers). Hardening these systems involves suitably trading off cost and reliability. Due to standards and regulations in the area, the reliability of subsystems involved even in the least critical applications has to be evaluated, and in most cases hardening has to be performed at very low extra cost. In this work, two approaches are proposed for hardening the LIN bus, which implements a serial communication network typically used in low-throughput and low-cost sub-systems in automotive applications. First, critical elements in LIN nodes are identified and some techniques to harden them are proposed, following a selective hardening approach. Second, collaborative hardening techniques are proposed for reducing global sensitivity in a LIN network built with commercial devices, trying to achieve a high degree of robustness in the network with low-cost solutions. We report some experimental results allowing the evaluation of the hardware cost and the robustness of the proposed techniques.

  22. Soft Error Effects Analysis and Mitigation in VLIW Safety-Critical Applications
    D. Sabena, M. Sonza Reorda, L. Sterpone
    Proceedings of IFIP/IEEE 22nd International Conference on Very Large Scale Integration (VLSI-SoC)
    KEYWORDS: hardening technique; vliw processor; cross domain error
    ABSTRACT: VLIW architectures are widely employed in several embedded signal-processing applications, since they offer the opportunity to obtain high computational performance while maintaining a reduced clock rate and power consumption. Recently, VLIW processors have been considered for employment in various embedded processing systems, including safety-critical ones (e.g., in the aerospace, automotive and rail transport domains). Terrestrial safety-critical applications based on newer nano-scale technologies raise increasing concerns about transient errors induced by neutrons. Therefore, techniques to effectively estimate and improve the reliability of VLIW processors are of great interest. In this paper, we present a novel technique aimed at further improving the efficiency of the Triple Modular Redundancy (TMR) hardening technique applied at the software level on VLIW processors. In particular, we first experimentally demonstrate that the TMR-based software technique, when applied at the C code level, is not able to cope with most of the failures affecting user logic resources. Then, we propose a method able to analyze and modify the TMR-based code for a generic VLIW processor, in order to improve the fault tolerance of the executed application without modifying the VLIW processor. In detail, the proposed technique is able to reduce the number of cross-domain errors affecting the TMR-hardened code of a VLIW processor data path. We provide figures about performance and fault coverage for both the unprotected and protected versions of a set of benchmark applications, thus demonstrating the benefits and limitations of our approach.
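    The software-level TMR scheme the abstract builds on can be sketched generically: a computation runs in three redundant domains and a majority voter masks a single corrupted result, while disagreement among all three replicas corresponds to the uncorrectable case that cross-domain errors can cause. This is only illustrative background, not the paper's VLIW-specific code transformation; all names below are hypothetical.

    ```python
    # Minimal sketch of software-implemented Triple Modular Redundancy (TMR).

    def vote(r1, r2, r3):
        """Majority voter: masks a single corrupted replica."""
        if r1 == r2 or r1 == r3:
            return r1
        if r2 == r3:
            return r2
        # All replicas disagree: uncorrectable (e.g., a cross-domain error).
        raise RuntimeError("no majority among replicas")

    def tmr_add(a, b, fault=None):
        """Compute a+b in three replicas; 'fault' optionally corrupts one."""
        results = [a + b, a + b, a + b]
        if fault is not None:
            idx, value = fault
            results[idx] = value      # simulate a transient error in one domain
        return vote(*results)

    print(tmr_add(2, 3))                 # 5
    print(tmr_add(2, 3, fault=(1, 99)))  # still 5: the single error is masked
    ```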

  23. Software-implemented Fault Injection in Operating System Kernel Mutex Data Structure
    B. Montrucchio, M. Rebaudengo, A.D. Velasco
    Proceedings of 5th IEEE Latin American Symposium on Circuits and Systems (LASCAS)
    DOI: 10.1109/LASCAS.2014.6820257
    KEYWORDS: software-implemented fault injection; mutex data structure; operating system
    ABSTRACT: Embedded and computer-based systems are subject to transient errors originating from several sources, including the impact of high-energy particles on sensitive areas of integrated circuits. The evaluation of the sensitivity of applications to transient faults is a major issue. The paper presents a new approach for testing the effects of transient faults on the Operating System kernel, specifically focusing on the kernel mutex data structure, a key component of the kernel. A Software-implemented Fault Injection tool able to inject faults while guaranteeing the non-intrusiveness and repeatability of the fault injection campaign is proposed. An analysis of the results has been performed on a large set of mutexes, in order to evaluate their criticality, in particular during input/output operations. Experimental results, obtained on a set of benchmark programs, show the relevance of the effects of transient faults on this set of variables. Moreover, a significant percentage of faults can damage the system, also producing an application failure.

  24. TURAN: Evolving non-deterministic players for the iterated prisoner's dilemma
    M. Gaudesi, E. Piccolo, G. Squillero, A. Tonda
    Evolutionary Computation (CEC), 2014 IEEE Congress on
    DOI: 10.1109/CEC.2014.6900564
    ABSTRACT: The iterated prisoner's dilemma is a widely known model in game theory, fundamental to many theories of cooperation and trust among self-interested beings. There are many works in the literature about developing efficient strategies for this problem, both inside and outside the machine learning community. This paper shifts the focus from finding a "good strategy" in absolute terms to dynamically adapting and optimizing the strategy against the current opponent. Turan evolves competitive non-deterministic models of the current opponent, and exploits them to predict its moves and maximize the payoff as the game develops. Experimental results show that the proposed approach is able to obtain good performance against different kinds of opponents, whether or not their strategies can be implemented as finite state machines.
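    The underlying game is well defined: in each round both players choose to cooperate (C) or defect (D) and receive the standard payoffs (T=5, R=3, P=1, S=0). A minimal sketch of an iterated match follows, with "tit-for-tat" and "always defect" as sample strategies; Turan's evolved opponent models are not shown, and all names are illustrative.

    ```python
    # Standard iterated prisoner's dilemma payoff matrix: (my payoff, opponent's).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strat1, strat2, rounds=10):
        """Play an iterated match; each strategy sees the opponent's history."""
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = strat1(h2), strat2(h1)
            p1, p2 = PAYOFF[(m1, m2)]
            s1, s2 = s1 + p1, s2 + p2
            h1.append(m1); h2.append(m2)
        return s1, s2

    tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # copy last move
    always_defect = lambda opp: "D"

    print(play(tit_for_tat, always_defect))  # (9, 14): TFT loses only round 1
    ```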

  25. The tradeoffs between data delivery ratio and energy costs in wireless sensor networks
    D. Bucur, G. Iacca, G. Squillero, A. Tonda
    Proceedings of the 2014 conference on Genetic and evolutionary computation - GECCO '14
    DOI: 10.1145/2576768.2598384
    ABSTRACT: Wireless sensor network (WSN) routing protocols, e.g., the Collection Tree Protocol (CTP), are designed to adapt in an ad-hoc fashion to the quality of the environment. WSNs thus have high internal dynamics and complex global behavior. Classical techniques for performance evaluation (such as testing or verification) fail to uncover the cases of extreme behavior which are most interesting to designers. We contribute a practical framework for performance evaluation of WSN protocols. The framework is based on multi-objective optimization, coupled with protocol simulation and evaluation of performance factors. For evaluation, we consider the two crucial functional and non-functional performance factors of a WSN, respectively: the ratio of data delivery from the network (DDR), and the total energy expenditure of the network (COST). We are able to discover network topological configurations over which CTP has unexpectedly low DDR and/or high COST performance, and expose full Pareto fronts which show what the possible performance tradeoffs for CTP are in terms of these two performance factors. Eventually, Pareto fronts allow us to bound the state space of the WSN, a fact which provides essential knowledge to WSN protocol designers.
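    The Pareto-front idea the abstract relies on reduces to a dominance check over the two objectives (maximize DDR, minimize COST). A minimal sketch of front extraction, with made-up sample configurations (the paper obtains its points from protocol simulation):

    ```python
    # Each point is (ddr, cost): delivery ratio to maximize, energy to minimize.

    def dominates(a, b):
        """a dominates b if it is no worse in both objectives and better in one."""
        return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

    def pareto_front(points):
        """Keep only the points dominated by no other point."""
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # Hypothetical network configurations evaluated for (DDR, COST).
    configs = [(0.95, 120.0), (0.90, 80.0), (0.80, 85.0), (0.99, 200.0)]
    print(pareto_front(configs))  # (0.80, 85.0) is dominated by (0.90, 80.0)
    ```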

  26. Towards Automated Malware Creation: Code Generation and Code Integration
    A. Cani, M. Gaudesi, E. Sanchez, G. Squillero, A. Tonda
    KEYWORDS: malware; virus; evolutionary algorithms; security
    ABSTRACT: The analogies between computer malware and biological viruses are more than obvious. The very idea of an artificial ecosystem where malicious software can evolve and autonomously find new, more effective ways of attacking legitimate programs and damaging sensitive information is both terrifying and fascinating. The paper proposes two different ways of exploiting an evolutionary algorithm to devise malware: the former targeting heuristic-based antivirus scanners; the latter optimizing a Trojan attack. Testing the stability of a system against a malware-based attack, or checking the reliability of the heuristic scan of anti-virus software against an original malware application, could be interesting for the research community and advantageous to the IT industry. Experimental results show the feasibility of the proposed approaches on simple real-world test cases.

  27. Universal information distance for genetic programming
    M. Gaudesi, G. Squillero, A. Tonda
    Proceedings of the 2014 conference companion on Genetic and evolutionary computation companion - GECCO Comp '14
    DOI: 10.1145/2598394.2598440
    KEYWORDS: distance metric; algorithms; fitness sharing; individual encoding; symbolic regression; diversity preservation; genetic programming; measurements; experimental analysis
    ABSTRACT: This paper presents a genotype-level distance metric for Genetic Programming (GP) based on the symmetric difference concept: first, the information contained in individuals is expressed as a set of symbols (the content of each node, its position inside the tree, and recurring parent-child structures); then, the difference between two individuals is computed considering the number of elements belonging to one, but not both, of their symbol sets.
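    The two steps the abstract describes (symbol-set extraction, then symmetric difference) can be sketched directly. The symbol extraction below is a simplified stand-in for the paper's encoding, using nested tuples as toy GP trees; only the symmetric-difference count is faithful to the stated idea.

    ```python
    # Sketch of a symmetric-difference distance between GP individuals.

    def symbols(tree, pos="r"):
        """Flatten a tree (op, left, right) or leaf into a set of symbols:
        node content with its position, plus parent-child structure symbols."""
        if not isinstance(tree, tuple):
            return {(pos, tree)}
        op, left, right = tree
        lc = left[0] if isinstance(left, tuple) else left
        rc = right[0] if isinstance(right, tuple) else right
        s = {(pos, op), (pos, op, "L", lc), (pos, op, "R", rc)}
        return s | symbols(left, pos + "L") | symbols(right, pos + "R")

    def distance(t1, t2):
        """Count elements belonging to one, but not both, symbol sets."""
        return len(symbols(t1) ^ symbols(t2))

    t1 = ("+", "x", ("*", "x", "y"))   # x + x*y
    t2 = ("+", "x", ("*", "y", "y"))   # x + y*y
    print(distance(t1, t1), distance(t1, t2))  # 0 4
    ```

    Because the metric is set-based, identical individuals are at distance zero, and small structural changes move only the affected symbols.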

  28. Validation of a tool for estimating the effects of Soft Errors on modern SRAM-based FPGAs
    M. Desogus, L. Sterpone, D.M. Codinachs
    Proceedings of the 2014 IEEE 20th International On-Line Testing Symposium (IOLTS)
    KEYWORDS: fault tolerance; experimental validation; fpgas
    ABSTRACT: Predicting soft errors on SRAM-based FPGAs without wasteful, time-consuming, or high-cost campaigns has always been a very difficult goal. Among the available methods, we proposed an updated version of an analytical approach to predict Single Event Effects (SEEs) based on the analysis of the circuit the FPGA implements. In this paper, we provide an experimental validation of this approach, by comparing the results it provides with a fault injection campaign. We adopted our analytical method for computing the error rate of a design implemented on an SRAM-based FPGA. Furthermore, we compared the obtained soft-error figure with the one measured by fault injection. The experimental analysis demonstrated that the analytical method closely matches the effective soft-error rates, making it a viable solution for soft-error estimation at early design phases.