Analogue Imprecision in MLP Training, Progress in Neural Processing, Vol. 4
Peter Edwards · Alan F. Murray
Aug 1996 · Progress in Neural Processing, Book 4 · World Scientific
Ebook · 192 pages · Rating: 3.0 (4 reviews)
About this ebook
Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. The aim of the book is to show how including an imprecision model in the learning scheme, as a "fault tolerance hint", can aid understanding of the accuracy and precision requirements of a particular implementation. In addition, the study shows how such a scheme can give rise to significant performance enhancement.
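The central technique the book studies, training an MLP with synaptic weight noise so that the learned solution tolerates analogue imprecision, can be sketched briefly. The NumPy fragment below is a minimal illustration of one common formulation (multiplicative Gaussian noise applied to the weights on each training pass, with updates accumulated into the clean weights); it is not the authors' code, and the network sizes, noise level sigma and learning rate lr are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(w, sigma):
    # Multiplicative Gaussian weight noise: w * (1 + N(0, sigma^2)),
    # a simple stand-in for analogue synaptic imprecision.
    return w * (1.0 + sigma * rng.standard_normal(w.shape))

def train_step(W1, W2, x, t, sigma=0.1, lr=0.05):
    # Draw a fresh noisy copy of the weights for this pass only.
    W1n, W2n = noisy(W1, sigma), noisy(W2, sigma)
    h = np.tanh(x @ W1n)                 # hidden layer
    y = np.tanh(h @ W2n)                 # output layer
    # Backpropagate squared error through the noisy weights,
    # but apply the updates to the clean weights W1, W2.
    dy = (y - t) * (1.0 - y ** 2)
    dh = (dy @ W2n.T) * (1.0 - h ** 2)
    W2 -= lr * np.outer(h, dy)
    W1 -= lr * np.outer(x, dh)
    return 0.5 * float(np.sum((y - t) ** 2))

# Toy usage: a hypothetical 2-4-1 network on XOR with +/-1 targets.
W1 = 0.5 * rng.standard_normal((2, 4))
W2 = 0.5 * rng.standard_normal((4, 1))
data = [([0, 0], [-1]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [-1])]
for epoch in range(2000):
    for x, t in data:
        train_step(W1, W2, np.asarray(x, float), np.asarray(t, float))
```

In this formulation the noise perturbs only the forward and backward computation, while the updates accumulate in the underlying clean weights, so the injected imprecision acts as a regularising "hint" rather than corrupting the stored solution.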
Series: Progress in Neural Processing
Genre: Computers & technology