International Journal of Artificial Intelligence and Neural Networks
Author(s) : K. BALACHANDRAN, NIRMAL LOURDH RAYAN S
Mathematical optimization refers to finding the minimum or maximum value over a desired set of outcomes. This paper discusses optimization at two levels. First, the Levenberg-Marquardt algorithm is used for backpropagation to minimize the non-linear least-squares error via curve fitting; this minimization involves functional optimization to reduce the error in neural network (NN) classification. The second level of optimization improves the performance of the Levenberg-Marquardt algorithm (LMA) itself by parallelizing its computation using divide-and-conquer methods. We use the Fork/Join framework in Java, which applies the divide-and-conquer technique to split a task into many elementary subtasks and execute them in parallel. Additionally, the Fork/Join architecture uses a work-stealing algorithm to utilize worker threads effectively: threads that have completed their own tasks steal tasks from threads that are still busy. We use the Million Song Dataset from the standard UCI Machine Learning Repository to construct the neural network. The target output is the year of a song's publication, and the input vector consists of the song's metadata and audio characteristics. The effective speedup achieved for varying data sizes is estimated by comparing the performance of the traditional LMA with that of the parallelized LMA. We also study the rate of performance improvement as the input sample size is varied from 100 to 100,000. We achieve a steady performance gain of over 300% using thread-level parallelism on LMA on a single workstation.
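To illustrate the parallelization strategy described above, the following is a minimal sketch (not the paper's actual code) of a Java Fork/Join `RecursiveTask` that computes a sum of squared errors, the core quantity in a least-squares objective, by divide and conquer. The class name `SquaredErrorTask` and the split threshold are illustrative assumptions; the Fork/Join pool's work-stealing scheduler balances the resulting subtasks across worker threads.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: parallel sum of squared errors via divide and conquer.
// Class and threshold names are illustrative, not from the paper.
public class SquaredErrorTask extends RecursiveTask<Double> {
    private static final int THRESHOLD = 1_000; // split until subtask is small enough
    private final double[] predicted, actual;
    private final int lo, hi;

    public SquaredErrorTask(double[] predicted, double[] actual, int lo, int hi) {
        this.predicted = predicted;
        this.actual = actual;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Double compute() {
        if (hi - lo <= THRESHOLD) {             // base case: compute directly
            double sum = 0.0;
            for (int i = lo; i < hi; i++) {
                double e = predicted[i] - actual[i];
                sum += e * e;
            }
            return sum;
        }
        int mid = (lo + hi) >>> 1;              // split the range in half
        SquaredErrorTask left  = new SquaredErrorTask(predicted, actual, lo, mid);
        SquaredErrorTask right = new SquaredErrorTask(predicted, actual, mid, hi);
        left.fork();                            // schedule left half asynchronously
        double r = right.compute();             // compute right half in this thread
        return r + left.join();                 // combine partial sums
    }

    public static void main(String[] args) {
        int n = 10_000;
        double[] p = new double[n], a = new double[n];
        for (int i = 0; i < n; i++) { p[i] = i + 1.0; a[i] = i; } // each error is 1.0
        double sse = new ForkJoinPool().invoke(new SquaredErrorTask(p, a, 0, n));
        System.out.println(sse); // 10000 squared errors of 1.0 -> 10000.0
    }
}
```

Forking one half and computing the other in the current thread (rather than forking both) keeps the current worker busy and leaves the forked subtask available for idle workers to steal, which is the pattern the Fork/Join framework is designed around.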