
1. Where can I find this software package on the internet?

2. What capabilities do your software packages have that differentiate them from those developed elsewhere?

3. Why do your MLP networks have bypass weights connecting the input and output layers?

4. Why do your MLP networks have linear output layer activations instead of sigmoids?

5. What size networks can be designed using this package?

6. Why do you have separate software packages for regression and classification networks? Don't both types of network use the same format for training data and the same training algorithms?

7. How are desired outputs defined in this software?

8. What error function is being minimized during network training?

9. What is the error percentage that is printed out during fast training and functional link net training for classifiers?

10. Why does this software generate an MSE greater than MATLAB's?

11. Do you have any papers related to modular neural nets?

12. Do you have any papers related to fast training of MLPs?

13. Do you have any papers related to the analysis of trained neural networks?

14. Do you have any papers on neural net pruning?

15. Do you have any papers related to optimal processing using neural nets?

16. Do you have any papers related to the prediction of neural net performance?

17. Do you have any papers related to data pre-processing for neural nets?

18. Do you have any papers related to the training of functional link neural networks?

19. Do you have any papers on real-world applications of neural networks?

20. Does the data in the training file need to be normalized before using it to train any network?

21. What source code is included with the package?

1. WHERE CAN I FIND THIS SOFTWARE PACKAGE ON THE INTERNET?

At our research lab: www-ee.uta.edu/eeweb/ip/Software/Software.htm
Simtel: www.simtel.net under MIS (miscellaneous).

2. WHAT CAPABILITIES DO YOUR SOFTWARE PACKAGES HAVE THAT DIFFERENTIATE THEM FROM THOSE DEVELOPED ELSEWHERE?

These packages:

  • Include a network sizing program that estimates nonlinear network performance or classification error percentage as a function of network size (the number of hidden units for the MLP, or of clusters for the PLN or NNC). This program usually, but not always, works.
  • Include a fast MLP training program unlike others that are available. This technique is about 3 times faster than full conjugate gradient training (without heuristic speed-ups) and performs much better. Training is 10 to 1000 times faster than back propagation. The algorithm is also better and faster than Levenberg-Marquardt, and scales better as well.
  • Include a sophisticated feature selection option that accurately finds the best input feature subsets of sizes 1 to N (the number of inputs), and estimates the classification or approximation error for each subset.
  • Contain conventional nonlinear networks, the piecewise linear network (PLN) and the nearest neighbor classifier (NNC), for comparison with the neural nets.
  • Allow users to generate many networks of different sizes simultaneously. For the MLP, a large network is pruned and smaller networks of different sizes are saved simultaneously. For other networks, training and pruning are combined.
  • Are highly automated and require very little user input.
  • Allow unlimited numbers of training patterns.

3. WHY DO YOUR MLP NETWORKS HAVE BYPASS WEIGHTS CONNECTING THE INPUT AND OUTPUT LAYERS?

Our MLPs are fully-connected because:

  • MLPs with bypass weights require fewer hidden units and train faster (see the sketch after this list).
  • Unlike in cascade-connected MLPs, the number of hidden units in a fully-connected MLP reflects the difficulty and the degree of nonlinearity of the training problem. In a stock market forecasting network, for example, a fully-connected network shows no improvement as hidden units are added, because the problem is inherently linear. The corresponding cascade-connected MLP shows improvement as hidden units are added, fooling the user into believing the problem is nonlinear.
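
The sketch below is a rough illustration only, not the package's source code: it shows how a fully-connected MLP with bypass weights computes its outputs. The function name, array shapes, and sigmoid hidden activations are assumptions made for the example.

    import numpy as np

    def mlp_forward(x, W_hid, b_hid, W_out, W_byp, b_out):
        # Hidden layer with sigmoid activations (assumed for this sketch)
        h = 1.0 / (1.0 + np.exp(-(W_hid @ x + b_hid)))
        # Linear output layer: hidden-to-output weights plus the bypass
        # weights that connect the inputs directly to the outputs
        return W_out @ h + W_byp @ x + b_out

    # Example shapes for N inputs, Nh hidden units, and M outputs:
    # x: (N,), W_hid: (Nh, N), b_hid: (Nh,), W_out: (M, Nh), W_byp: (M, N), b_out: (M,)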

4. WHY DO YOUR MLP NETWORKS HAVE LINEAR OUTPUT LAYER ACTIVATIONS INSTEAD OF SIGMOIDS?

Our MLPs have linear output layer activations in order to speed up training: with linear outputs, the output weights can be found by solving a set of linear equations, as the sketch below illustrates.
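
The sketch below is a minimal illustration of this idea, not the package's code. The hidden-output matrix H, the function name, and the small ridge term added for numerical stability are assumptions for the example.

    import numpy as np

    def solve_output_weights(H, T, ridge=1e-6):
        # H: Nv x Nu matrix of hidden-unit outputs (a column of ones can be
        # appended for the output thresholds); T: Nv x M desired outputs.
        # With linear output activations the output weights minimize
        # ||H W - T||^2, so W solves the linear equations (H'H) W = H'T.
        A = H.T @ H + ridge * np.eye(H.shape[1])   # small ridge term for stability
        return np.linalg.solve(A, H.T @ T)         # Nu x M output weight matrix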

5. WHAT SIZE NETWORKS CAN BE DESIGNED USING THIS PACKAGE?

Basic Version: Up to 20 inputs for all networks, 10 hidden units for the MLP, 10 modules for the PLN, 30 clusters for the NNC.

User Version: Up to 30 inputs for all networks, 20 hidden units for the MLP, 20 modules for the PLN, 50 clusters for the NNC.

Advanced Version: Up to 60 inputs for all networks, 50 hidden units for the MLP, 50 modules for the PLN, 120 clusters for the NNC.

Professional Version: Over 100 inputs for all networks, over 100 hidden units for the MLP, over 100 modules for the PLN, over 200 clusters for the NNC.

6. WHY DO YOU HAVE SEPARATE SOFTWARE PACKAGES FOR REGRESSION AND CLASSIFICATION NETWORKS? DON’T BOTH TYPES OF NETWORK USE THE SAME FORMAT FOR TRAINING DATA AND THE SAME TRAINING ALGORITHMS?

We have separate packages for classification and regression or estimation because:

  • Our training algorithms for classification and estimation networks have some important differences. Unlike in the regression/estimation nets, the error criterion for classification nets iteratively approaches the probability of classification error.
  • Combining the two packages would make the result unnecessarily large.
  • Many people need to do estimation or classification but not both.

7. HOW ARE DESIRED OUTPUTS DEFINED IN THIS SOFTWARE?

For classification, the number of outputs is M = Nc, where Nc is the number of classes. Let tpk denote the desired output for the pth training pattern and the kth output, and let ypk denote the actual output for the pth training pattern and the kth output. Given the correct class number (class ID) ic(p) for the pth pattern, the desired output tpk is 1 for the correct class (k = ic(p)) and 0 for other classes.

For regression, the number of desired outputs is M, which is known by the user. The desired output for the pth training pattern and the kth output is denoted by tpk.
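
For the classification case, the sketch below is an illustrative way of building the desired outputs from the class IDs; the function name and the assumption that class numbers run from 1 to Nc are not from the package.

    import numpy as np

    def desired_outputs(class_ids, Nc):
        # class_ids: length-Nv sequence of correct class numbers ic(p), assumed 1..Nc
        ids = np.asarray(class_ids)
        T = np.zeros((len(ids), Nc))
        T[np.arange(len(ids)), ids - 1] = 1.0   # tpk = 1 when k = ic(p), 0 otherwise
        return T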

 

8. WHAT ERROR FUNCTION IS BEING MINIMIZED DURING NETWORK TRAINING?

The mean squared error

    E = (1/Nv) * SUM(p = 1 to Nv) SUM(k = 1 to M) (tpk - ypk)^2

is minimized, where Nv is the number of training patterns and tpk and ypk are the desired and actual outputs defined in question 7. The MSE and the error percentage are printed for each iteration.
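
A minimal sketch of this computation, for illustration only; it assumes the desired and actual outputs are stored as Nv x M arrays.

    import numpy as np

    def mse(T, Y):
        # T: desired outputs tpk, Y: actual outputs ypk, both Nv x M arrays
        Nv = T.shape[0]
        return np.sum((T - Y) ** 2) / Nv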


9. WHAT IS THE ERROR PERCENTAGE THAT IS PRINTED OUT DURING FAST TRAINING AND FUNCTIONAL LINK NET TRAINING FOR CLASSIFIERS?

Err = 100 x (number of patterns misclassified/Nv).
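
A minimal sketch of this count, for illustration only; it assumes that each pattern is assigned to the class whose output is largest, which is an assumption made for the example.

    import numpy as np

    def error_percentage(T, Y):
        # T: desired outputs (1 for the correct class, 0 elsewhere), Y: actual
        # outputs, both Nv x M. Assumption for this sketch: each pattern is
        # assigned to the class whose output is largest.
        misclassified = np.sum(np.argmax(Y, axis=1) != np.argmax(T, axis=1))
        return 100.0 * misclassified / T.shape[0]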

 

10. WHY DOES THIS SOFTWARE GENERATE AN MSE GREATER THAN MATLAB’S?

MATLAB’s “MSE” is our MSE divided by M, the number of outputs, so the value this software reports appears larger. For example, with M = 4 outputs, an MSE of 0.2 from this package corresponds to a MATLAB “MSE” of 0.05. Once the factor of M is accounted for, our MSE is actually less than MATLAB’s most of the time.

 

11. DO YOU HAVE ANY PAPERS RELATED TO MODULAR NEURAL NETS?

Yes.

  • K. Rohani and M.T. Manry, "Multi-Layer Neural Network Design Based on a Modular Concept," Journal of Artificial Neural Networks, vol. 1, no. 3, 1994, pp. 349-370.
  • S. Subbarayan, K.K. Kim, M.T. Manry, V. Devarajan, and H-H Chen, "Modular Neural Network Architecture Using Piecewise Linear Mapping," Conference Record of the Thirtieth Annual Asilomar Conference on Signals, Systems, and Computers, 11/3/96 to 11/6/96, vol. 2, pp. 1171-1175.
  • H. Chandrasekaran, K. Kim, and M.T. Manry, "Sizing of the Multilayer Perceptron via Modular Networks," Proceedings of NNSP'99, August 23-25, 1999, Madison, Wisconsin, pp. 215-224.
  • H. Chandrasekaran and M.T. Manry, "Convergent Design of a Piecewise Linear Neural Network," Proceedings of IJCNN'99., pp. 1339-1344.

12. DO YOU HAVE ANY PAPERS RELATED TO FAST TRAINING OF MLPS?

Yes.

  • M.T. Manry, S.J. Apollo, L.S. Allen, W.D. Lyle, W. Gong, M.S. Dawson, and A.K. Fung, "Fast Training of Neural Networks for Remote Sensing," Remote Sensing Reviews, vol. 9, pp. 77-96, 1994.
  • H-H Chen, M.T. Manry, and H. Chandrasekaran, "A Neural Network Training Algorithm utilizing Multiple Sets of Linear Equations," Neurocomputing, April 1999, vol. 25, pp. 55-72
  • C. Yu and M.T. Manry, "A Modified Hidden Weight Optimization Algorithm for Feed Forward Neural Networks," Conference Record of the Thirty Sixth Annual Asilomar Conference on Signals, Systems, and Computers, November 2002, pp. 1034-1038.
  • Changhua Yu, Michael T. Manry, and Jiang Li, "Effects of nonsingular pre-processing on feed-forward network training," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 19, No. 2 (2005), pp. 217-247.
  • Changhua Yu, Michael T. Manry, and Jiang Li, "An Efficient Hidden Layer Training Method for Multilayer Perceptron," accepted by Neurocomputing.

13. DO YOU HAVE ANY PAPERS RELATED TO THE ANALYSIS OF TRAINED NEURAL NETWORKS?

Yes.

  • X. Jiang, Mu-Song Chen, and M.T. Manry, "Compact Polynomial Modeling of the Multi-Layer Perceptron," Conference Record of the Twenty-Sixth Annual Asilomar Conference on Signals, Systems, and Computers, Oct. 1992, vol 2, pp.791-795.
  • M.S. Chen and M.T. Manry, "Conventional Modeling of the Multi-Layer Perceptron Using Polynomial Basis Functions," IEEE Transactions on Neural Networks, Vol. 4, no. 1, pp. 164-166, January 1993.
  • X. Jiang, Mu-Song Chen, M.T. Manry, M.S. Dawson, A.K. Fung, "Analysis and Optimization of Neural Networks for Remote Sensing," Remote Sensing Reviews, vol. 9, pp. 97-114, 1994.
  • W. Gong, H.C. Yau, and M.T. Manry, "Non-Gaussian Feature Analyses Using a Neural Network," Progress in Neural Networks, vol. 2, 1994, pp. 253-269.

14. DO YOU HAVE ANY PAPERS ON NEURAL NET PRUNING?

Yes.

  • X. Jiang, Mu-Song Chen, M.T. Manry, M.S. Dawson, A.K. Fung, "Analysis and Optimization of Neural Networks for Remote Sensing," Remote Sensing Reviews, vol. 9, pp. 97-114, 1994.
  • H. Chandrasekaran, H.H. Chen, and M.T. Manry, "Pruning of Basis Functions in Nonlinear Approximators," Neurocomputing, vol. 34, 2000, pp. 29-53.
  • F. J. Maldonado and M.T. Manry, "Optimal Pruning of Feed Forward Neural Networks Using the Schmidt Procedure", Conference Record of the Thirty Sixth Annual Asilomar Conference on Signals, Systems, and Computers., November 2002, pp. 1024-1028.
  • F.J. Maldonado, M.T. Manry, “A Pseudogenetic Algorithm For MLP design based upon the Schmidt Procedure,” in Proc. of the IASTED International Conference Neural Networks and Computational Intelligence, Cancun Mexico, May 19-21, 2003, pp. 197-202.
  • F. J. Maldonado, M. T. Manry, and Tae-Hoon Kim, "Finding Optimal Neural Network Basis Function Subsets Using the Schmidt Procedure," Proc. of IJCNN'03.
  • Jiang Li, M.T. Manry, Randall Wilson, and Changhua Yu, "Prototype Based Classifier Design with Pruning," International Journal on Artificial Intelligence Tools, 2005.

15. DO YOU HAVE ANY PAPERS RELATED TO OPTIMAL PROCESSING USING NEURAL NETS?

Yes.

  • S.J. Apollo, M.T. Manry, L.S. Allen, and W.D. Lyle, "Optimality of Transforms for Parameter Estimation," Conference Record of the Twenty-Sixth Annual Asilomar Conference on Signals, Systems, and Computers, Oct. 1992, vol. 1, pp. 294-298.
  • W. Liang, M.T. Manry, Q. Yu, S.J. Apollo, M.S. Dawson, and A.K. Fung, "Bounding the Performance of Neural Network Estimators, Given Only a Set of Training Data," Conference Record of the Twenty-Eighth Annual Asilomar Conference on Signals, Systems, and Computers, vol. 2, 10/31/94 to 11/2/94, pp.912-916.
  • W. Liang, M.T. Manry, S.J. Apollo, M.S. Dawson, and A.K. Fung, "Stochastic Cramer Rao Bounds for Non-Gaussian Signals and Parameters," Proceedings of ICASSP-95, vol. 5, May 1995, pp. 3367-3369.
  • M.S. Dawson, M.T. Manry, and A.K. Fung, "Information Retrieval from Remotely Sensed Data and a Method to Remove Parameter Estimator Ambiguity," Proc. of IGARSS'95, Firenze, Italy, July 10-14 1995, vol. 1, pp 691-693.
  • M.T. Manry, S.J. Apollo, and Q. Yu, "Minimum Mean Square Estimation and Neural Networks," Neurocomputing, vol. 13, September 1996, pp. 59-74.
  • M.T. Manry, C-H Hsieh, S.J. Apollo, M.S. Dawson, and A.K. Fung, "Cramer-Rao Maximum a Posteriori Bounds for Non-Gaussian Signals and Parameters," International Journal of Intelligent Control and Systems, vol. 1, no. 3, 1996, pp. 381-391.
  • C-H Hsieh, M.T. Manry, and H. Chandrasekaran, "Near-Optimal Flight Load Synthesis Using Neural Networks," Proceedings of NNSP'99, August 23-25, 1999, Madison, Wisconsin, pp. 535-544.

16. DO YOU HAVE ANY PAPERS RELATED TO THE PREDICTION OF NEURAL NET PERFORMANCE?

Yes.

  • K.K. Kim and M.T. Manry, "A Complexity Algorithm for Estimating the Size of the Multilayer Perceptron," Conference Record of the Twenty-Ninth Annual Asilomar Conference on Signals, Systems, and Computers, 10/29/95 to 11/1/95, vol. 2, pp 899-903.
  • M.T. Manry, R. Shoults, and J. Naccarino, "An Automated System for Developing Neural Network Short Term Load Forecasters," Proceedings of the 58th American Power Conference, Chicago, Ill., April 9-11, 1996, vol. 1, pp. 237-241.
  • M.T. Manry, H. Chandrasekaran, and C-H Hsieh, "Signal Processing Applications of the Multilayer Perceptron," accepted as a book chapter for Handbook on Neural Network Signal Processing, edited by Yu Hen Hu and Jenq-Nenq Hwang, CRC Press, 2001.
  • Jiang Li, Michael T. Manry, Pramod Lakshmi Narasimha, and Changhua Yu, “Feature Selection Using a Piecewise Linear Network”, accepted by IEEE Trans. on Neural Networks.
  • Jiang Li, Jianhua Yao, Ronald M. Summers, Nicholas Petrick, Michael T. Manry, and Amy K. Hara, “An Efficient Feature Selection Algorithm for Computer-Aided Polyp Detection,” special issue of the International Journal on Artificial Intelligence Tools (IJAIT) to be published in 2006.

17. DO YOU HAVE ANY PAPERS RELATED TO DATA PRE-PROCESSING FOR NEURAL NETS?

Yes. 

  • S.J. Apollo, M.T. Manry, L.S. Allen, and W.D. Lyle, "Optimality of Transforms for Parameter Estimation," Conference Record of the Twenty-Sixth Annual Asilomar Conference on Signals, Systems, and Computers, Oct. 1992, vol. 1, pp. 294-298.
  • Changhua Yu, Michael T. Manry, Jiang Li, and Tae-Hoon Kim, “Invariance of MLP Training to Input Feature Decorrelation”, to appear in Proceedings of the Seventeenth International Conference of the Florida AI Research Society, May 2004.
  • Changhua Yu, Michael T. Manry, and Jiang Li, "Effects of nonsingular pre-processing on feed-forward network training," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 19, No. 2 (2005), pp. 217-247.

18. DO YOU HAVE ANY PAPERS RELATED TO THE TRAINING OF FUNCTIONAL LINK NEURAL NETWORKS?

Yes.

  • H.C. Yau and M.T. Manry, "Iterative Improvement of a Gaussian Classifier," Neural Networks, Vol. 3, pp. 437-443, July 1990.
  • H.C. Yau and M.T. Manry, "Iterative Improvement of a Nearest Neighbor Classifier," Neural Networks, Vol. 4, Number 4, pp. 517-524, 1991.

19. DO YOU HAVE ANY PAPERS ON REAL-WORLD APPLICATIONS OF NEURAL NETWORKS?

Yes.

  • W. Gong, K.R. Rao, and M.T. Manry, "Progressive Image Transmission Using Self-Supervised Back propagation Neural Network," Journal of Electronic Imaging, Vol.1(1), pp. 88-94, January 1992. 
  • M.S. Dawson, A.K. Fung, and M.T. Manry, "Surface Parameter Retrieval Using Fast Learning Neural Networks," Remote Sensing Reviews, Vol. 7, pp. 1-18, 1993.
  • W. Gong, K.R. Rao, and M.T. Manry, "Progressive Image Transmission," IEEE Trans. on Circ. and Syst. for Video Technology, vol. 3, no. 6, October 1993, pp. 380-383.
  • Y. Saifullah and M.T. Manry, "Classification-Based Segmentation of ZIP Codes," IEEE Trans. on Systems, Man, and Cybernetics, vol. 23, no. 5, September/October 1993, pp.1437-1443.
  • R.R. Bailey, E.J. Pettit, R.T. Borochoff, M.T. Manry, and X. Jiang, "Automatic Recognition of USGS Land Use/Cover Categories Using Statistical and Neural Network Classifiers," Proceedings of SPIE OE/Aerospace and Remote Sensing, April 12-16, 1993, Orlando Florida.
  • K. Liu, S. Subbarayan, R.R.Shoults, M.T.Manry, C.Kwan, F.L.Lewis, and J.Naccarino, "Comparison of Very Short-Term Load Forecasting Techniques," IEEE Transactions on Power Systems, vol.11, no.2, May 1996, pp. 877-882.
  • W.E. Weideman, M.T. Manry, H.C. Yau, and W. Gong, "Comparisons of a Neural Network and a Nearest Neighbor Classifier Via the Numeric Handprint Character Recognition Problem," IEEE Transactions on Neural Networks, Vol. 6, no. 6, pp. 1524-1530, November 1995.
  • M.S. Dawson, A.K. Fung, and M.T. Manry, "A Robust Statistical-based Estimator for Soil Moisture Retrieval from Radar Measurements," IEEE Transactions on Remote Sensing, vol. 35, no. 1, January 1997, pp. 57-67.
  • T-H Kim, V. Devarajan, and M.T. Manry, "Road Extraction from Aerial Images Using Neural Networks," Proceedings of the 1997 ACS/ASPRS Annual Convention, April 4-7, 1997, Seattle Washington, pp. 146-154.
  • D.S. Kimes, R.F. Nelson, M.T. Manry, and A.K. Fung, "Attributes of Neural Networks for Extracting Continuous Vegetation Variables from Optical and Radar Measurements," International Journal of Remote Sensing, vol. 19, no. 14, 1998, pp. 2639-2663.
  • M.T. Manry, C. Subramanian, and J. Naccarino, "Reservoir Inflow Forecasting Using Neural Networks," Proceedings of the 61st American Power Conference, Chicago, Ill., April 6-8, 1999, vol. 1, pp. 237-241.
  • Jiang Li, Qilian Liang, and Michael T. Manry, “Demodulation for Wireless ATM Network Using Modified SOM Network”, to appear in Proceedings of ICASSP 2004.
  • Jiang Li, Qilian Liang, and Michael T. Manry, “Adaptive Channel Equalization for Satellite Communications with Multipath Based on Unsupervised Learning Algorithm”, in Proceedings of the 14th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, 2003.
  • Jiang Li, Qilian Liang, and Michael T. Manry, “Co-channel Interference Suppression with Model Simplification in TDMA Systems”, submitted to Globecom 2004.

20. DOES THE DATA IN THE TRAINING FILE NEED TO BE NORMALIZED BEFORE USING IT TO TRAIN ANY NETWORK?

All network training algorithms automatically normalize the input training data. The training algorithms estimate the input means and standard deviations and subtract the mean from each input, making the inputs zero-mean. The user therefore does not need to supply a training data file with normalized inputs in order to train a network.
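
The sketch below illustrates this normalization step; it is not the package's code, and the function name and array layout are assumptions.

    import numpy as np

    def normalize_inputs(X):
        # X: Nv x N array of raw input features from the training file
        means = X.mean(axis=0)
        stds = X.std(axis=0)              # estimated alongside the means
        return X - means, means, stds     # mean removal makes the inputs zero-mean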

21. WHAT SOURCE CODE IS INCLUDED WITH THE PACKAGE?

The package contains original source code for training a network using K-Means and Self-Organizing Maps (SOMs). It also includes source code for testing a saved network for the Multilayer Perceptron (MLP), Learning Vector Quantization (LVQ), the Functional Link Network (FLN), and the Piecewise Linear Network (PLN).

 

Copyright © Neural Decision Lab LLC