The library contains a number of interconnected Java packages that implement machine learning and artificial intelligence algorithms. They are written for the kind of people who like to implement algorithms themselves.
- Converted the project from Ant to Gradle
- Added Logback to support logging messages to file and console
- Created test classes that wrap the underlying classes and run additional iterations
`./gradlew clean build`
Note: this runs all the tests and writes the results under "output". Depending on the machine it may take a long time, since some tests run 100 or even 1000 iterations; expect run times of around 40 minutes.
This creates a jar file: build/libs/abagail-1.0-SNAPSHOT.jar
To build the jar without running the tests:
`./gradlew clean build -x test`
All the generated output is under "output".
There are four classes that exercise various aspects of this homework. They are under src/test/java/example:
- EyeStateTestTest: trains the neural network by replacing backpropagation with each of the four optimizers.
- TwoColorComparisonTester
- TSPComparisonTest
- FourPeaksComparisonTest
These classes can be executed by opening them in an IDE and running each class individually, or via Gradle:
`./gradlew -Dtest.single=*EyeStateTestTest test`
`./gradlew -Dtest.single=*FourPeaksComparisonTest test`
`./gradlew -Dtest.single=*TSPComparisonTest test`
`./gradlew -Dtest.single=*TwoColorComparisonTester test`
Note the asterisk before the class name: it wildcards the package name, which is unconventional.
See Issues page.
- Fork it.
- Create a branch (`git checkout -b my_branch`)
- Commit your changes (`git commit -am "Awesome feature"`)
- Push to the branch (`git push origin my_branch`)
- Open a Pull Request
- Enjoy a refreshing Diet Coke and wait
- Hidden Markov Models
- Baum-Welch reestimation algorithm, scaled forward-backward algorithm, Viterbi algorithm
- Support for Input-Output Hidden Markov Models
- Write your own output or transition probability distribution or use the provided distributions, including neural network based conditional probability distributions
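As a sketch of the decoding piece listed above, here is a minimal self-contained Viterbi decoder for a discrete HMM. The class and method names here are illustrative only, not the library's own HMM API:

```java
// Minimal Viterbi decoder for a discrete HMM: finds the most likely
// hidden-state sequence for an observation sequence, working in log space
// to avoid underflow. Illustrative sketch; not the library's HMM classes.
public class ViterbiSketch {
    // pi: initial state probabilities, a: transition matrix, b: emission matrix
    public static int[] viterbi(double[] pi, double[][] a, double[][] b, int[] obs) {
        int n = pi.length, t = obs.length;
        double[][] logProb = new double[t][n];
        int[][] back = new int[t][n];
        for (int s = 0; s < n; s++)
            logProb[0][s] = Math.log(pi[s]) + Math.log(b[s][obs[0]]);
        for (int i = 1; i < t; i++) {
            for (int s = 0; s < n; s++) {
                double bestPrev = Double.NEGATIVE_INFINITY;
                int arg = 0;
                for (int p = 0; p < n; p++) {
                    double v = logProb[i - 1][p] + Math.log(a[p][s]);
                    if (v > bestPrev) { bestPrev = v; arg = p; }
                }
                logProb[i][s] = bestPrev + Math.log(b[s][obs[i]]);
                back[i][s] = arg;
            }
        }
        // backtrack from the most likely final state
        int[] path = new int[t];
        double best = Double.NEGATIVE_INFINITY;
        for (int s = 0; s < n; s++)
            if (logProb[t - 1][s] > best) { best = logProb[t - 1][s]; path[t - 1] = s; }
        for (int i = t - 1; i > 0; i--) path[i - 1] = back[i][path[i]];
        return path;
    }
}
```

The scaled forward-backward algorithm follows the same dynamic-programming shape, but sums over predecessors instead of maximizing.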
- Neural Networks
- Configurable error functions, including sum of squares and weighted sum of squares
- Multiple activation functions: logistic sigmoid, linear, tanh, and softmax
- Choose your weight update rule: standard update rule, standard update rule with momentum, Quickprop, RPROP
- Online and batch training
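To illustrate one of the update rules, here is a minimal sketch of the standard update rule with momentum; the class name and signature are illustrative, not the library's update-rule classes:

```java
// Standard gradient update with momentum for a flat weight vector:
// velocity = momentum * velocity - learningRate * gradient; weights += velocity.
// Consistent gradient directions accelerate; oscillating ones damp out.
// Illustrative sketch only.
public class MomentumUpdate {
    private final double learningRate, momentum;
    private final double[] velocity;

    public MomentumUpdate(int n, double learningRate, double momentum) {
        this.learningRate = learningRate;
        this.momentum = momentum;
        this.velocity = new double[n]; // starts at zero
    }

    public void update(double[] weights, double[] gradient) {
        for (int i = 0; i < weights.length; i++) {
            velocity[i] = momentum * velocity[i] - learningRate * gradient[i];
            weights[i] += velocity[i];
        }
    }
}
```

For example, repeatedly applying the rule to the gradient of f(w) = w² (which is 2w) drives w toward the minimum at 0.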
- Support Vector Machines
- Support for linear, polynomial, tanh, radial basis function kernels
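The four kernels reduce to simple functions of a dot product or a squared distance. A self-contained sketch, with illustrative names rather than the library's kernel classes:

```java
// The four SVM kernel families listed above, as plain static functions.
// Illustrative sketch; not the library's kernel API.
public class Kernels {
    // linear kernel: k(x, y) = x . y
    public static double linear(double[] x, double[] y) { return dot(x, y); }

    // polynomial kernel: k(x, y) = (x . y + c)^d
    public static double polynomial(double[] x, double[] y, double c, int d) {
        return Math.pow(dot(x, y) + c, d);
    }

    // sigmoid/tanh kernel: k(x, y) = tanh(scale * (x . y) + offset)
    public static double sigmoid(double[] x, double[] y, double scale, double offset) {
        return Math.tanh(scale * dot(x, y) + offset);
    }

    // radial basis function kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    public static double rbf(double[] x, double[] y, double gamma) {
        double s = 0;
        for (int i = 0; i < x.length; i++) { double d = x[i] - y[i]; s += d * d; }
        return Math.exp(-gamma * s);
    }

    private static double dot(double[] x, double[] y) {
        double s = 0;
        for (int i = 0; i < x.length; i++) s += x[i] * y[i];
        return s;
    }
}
```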
- Decision Trees
- Binary or all attribute value splitting
- Chi-square significance test pruning with configurable confidence levels
- Boosted decision stumps with AdaBoost
- K Nearest Neighbors
- KNN Classifier with weighted or non-weighted classification, customizable distance function
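A minimal sketch of distance-weighted KNN classification with a Euclidean distance function; the names and signature are illustrative, not the library's KNN classes:

```java
import java.util.Arrays;
import java.util.Comparator;

// Distance-weighted k-nearest-neighbor classification: the k nearest training
// points vote for their label, weighted by inverse distance. Illustrative sketch.
public class KnnSketch {
    public static int classify(double[][] train, int[] labels, double[] query,
                               int k, int numClasses) {
        // sort training indices by distance to the query
        Integer[] idx = new Integer[train.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> dist(train[i], query)));

        // accumulate inverse-distance-weighted votes from the k nearest
        double[] votes = new double[numClasses];
        for (int j = 0; j < k; j++) {
            int i = idx[j];
            votes[labels[i]] += 1.0 / (dist(train[i], query) + 1e-9); // avoid /0
        }
        int best = 0;
        for (int c = 1; c < numClasses; c++) if (votes[c] > votes[best]) best = c;
        return best;
    }

    private static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
        return Math.sqrt(s);
    }
}
```

Swapping in a different `dist` corresponds to the customizable distance function mentioned above; dropping the weighting gives the non-weighted variant.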
- Linear Algebra Algorithms
- Solve square systems, upper triangular systems, lower triangular systems, least squares
- Singular Value Decomposition, QR Decomposition, LU Decomposition, Schur Decomposition, Symmetric Eigenvalue Decomposition, Cholesky Factorization
- Make your own matrix decomposition with the easy to use Householder Reflection and Givens Rotation classes
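For instance, solving an upper triangular system is plain back-substitution; a self-contained sketch, not the library's linear algebra API:

```java
// Back-substitution for an upper triangular system Ux = b:
// solve the last equation first, then substitute upward.
// Illustrative sketch only; assumes nonzero diagonal entries.
public class UpperTriangularSolve {
    public static double[] solve(double[][] u, double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= u[i][j] * x[j];
            x[i] = s / u[i][i];
        }
        return x;
    }
}
```

Lower triangular systems are the mirror image (forward substitution), and the decompositions listed above reduce general systems to triangular ones.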
- Optimization Algorithms
- Randomized hill climbing, simulated annealing, genetic algorithms, and discrete dependency tree MIMIC
- Make your own crossover functions, mutation functions, neighbor functions, probability distributions, or use the provided ones.
- Optimize the weights of neural networks and solve travelling salesman problems
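The simplest of these, randomized hill climbing, can be sketched in a few lines on a toy bit-string problem (maximize the number of ones). The library wraps the evaluation function and neighbor function in its own classes; this self-contained version is illustrative only:

```java
import java.util.Random;

// Randomized hill climbing on a bit string, maximizing the number of ones:
// repeatedly flip a random bit and keep the flip if the score does not drop.
// Illustrative sketch; not the library's optimizer classes.
public class HillClimbSketch {
    public static int[] climb(int n, int iterations, long seed) {
        Random rng = new Random(seed);
        int[] current = new int[n];
        for (int i = 0; i < n; i++) current[i] = rng.nextInt(2); // random start
        int value = score(current);
        for (int it = 0; it < iterations; it++) {
            int flip = rng.nextInt(n);   // neighbor: flip one random bit
            current[flip] ^= 1;
            int neighborValue = score(current);
            if (neighborValue >= value) value = neighborValue; // accept if no worse
            else current[flip] ^= 1;                           // otherwise undo
        }
        return current;
    }

    // evaluation function: count of one-bits
    public static int score(int[] bits) {
        int s = 0;
        for (int b : bits) s += b;
        return s;
    }
}
```

Simulated annealing differs only in sometimes accepting worse neighbors with a temperature-dependent probability.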
- Graph Algorithms
- Clustering Algorithms
- Data Preprocessing
- Convert from continuous to discrete, discrete to binary
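One common way to convert a continuous attribute to a discrete one is equal-width binning; a minimal sketch of that idea (the library's own preprocessing classes may differ):

```java
// Equal-width discretization: map each continuous value to one of k bins
// spanning [min, max] of the data. Illustrative sketch only.
public class Discretize {
    public static int[] equalWidth(double[] values, int bins) {
        double min = values[0], max = values[0];
        for (double v : values) { min = Math.min(min, v); max = Math.max(max, v); }
        double width = (max - min) / bins;
        int[] out = new int[values.length];
        for (int i = 0; i < values.length; i++) {
            int b = width == 0 ? 0 : (int) ((values[i] - min) / width);
            out[i] = Math.min(b, bins - 1); // the maximum lands in the last bin
        }
        return out;
    }
}
```

Discrete-to-binary conversion is then a one-hot encoding of the resulting bin indices.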
- Reinforcement Learning