ThoughtWorksInc / DeepLearning.scala
A simple library for creating complex neural networks
Home Page: http://deeplearning.thoughtworks.school/
License: Apache License 2.0
We should switch to https://github.com/ThoughtWorksInc/each
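For reference, a minimal example of Each's direct-style monadic syntax, along the lines of the Each README (assuming scalaz is on the classpath):

import com.thoughtworks.each.Monadic._
import scalaz.std.option._

object EachDemo extends App {
  val name = Option("Each")

  // Inside monadic[Option], `.each` extracts the value of an Option,
  // and the whole block is rewritten into map/flatMap calls.
  val greeting: Option[String] = monadic[Option] {
    "Hello, " + name.each + "!"
  }

  println(greeting) // Some(Hello, Each!)
}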
import guidelines: Batch, Layer, Symbolic, Poly, XxxOps; train, predict, compose, withOutputDataHook
import guidelines: Tape, Poly, XxxOps, TapeTask: train, predict
Use TryT instead of EitherT in RAII.scala (a sketch of the idea follows the checklist below)
remove RAIITask
ResourceFactoryT
ResourceFactoryTSpec
remove sde-raii & sde-raii-Spec
Shared
SharedSpec
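Regarding the TryT-for-EitherT switch above: a hand-rolled sketch of the TryT idea (not the actual com.thoughtworks.tryt API), a transformer over F[Try[A]] that fixes the error type to Throwable instead of carrying EitherT's extra type parameter:

import scala.util.{Failure, Success, Try}

// Minimal Monad type class, just enough for the sketch.
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

final case class TryT[F[_], A](run: F[Try[A]]) {
  // Short-circuits on Failure, like EitherT[F, Throwable, A] would.
  def flatMap[B](f: A => TryT[F, B])(implicit F: Monad[F]): TryT[F, B] =
    TryT(F.flatMap(run) {
      case Success(a) => f(a).run
      case Failure(e) => F.pure(Failure(e): Try[B])
    })
}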
See https://travis-ci.org/ThoughtWorksInc/DeepLearning.scala/builds/213277800 and e8654a9
The error may disappear after we complete #14. However, the cause of the error still needs to be investigated.
The following compiles and runs correctly:
package testPackage
import com.thoughtworks.deeplearning.DifferentiableINDArray._
import com.thoughtworks.deeplearning.DifferentiableAny._
import com.thoughtworks.deeplearning.Lift._
import com.thoughtworks.deeplearning.Poly.MathFunctions._
import com.thoughtworks.deeplearning.Poly.MathOps
import org.nd4j.linalg.api.ndarray.INDArray
import org.nd4s.Implicits._
import shapeless._
object Bug extends App {
  def layer(implicit x: From[INDArray] ##T): To[INDArray] ##T = {
    // The explicit type annotation on `result` is what makes this compile;
    // see the error below when it is removed.
    val result: To[INDArray] ##T = exp(x).withOutputDataHook { x: INDArray =>
      println(x)
    }
    // Normalize each row by its sum (a softmax over exp(x)).
    result / result.sum(1)
  }
  layer.train(
    Array(Array(1, 2, 3, 4),
          Array(1, 2, 3, 4),
          Array(1, 2, 3, 4),
          Array(1, 2, 3, 4)).toNDArray)
}
If the type annotation on result is removed, the following error is reported (apparently the inferred Layer.Aux type no longer matches the implicit conversion that provides sum):
Error:(17, 21) value sum is not a member of com.thoughtworks.deeplearning.Layer.Aux[com.thoughtworks.deeplearning.Layer.Batch{type Data <: org.nd4j.linalg.api.ndarray.INDArray; type Delta >: org.nd4j.linalg.api.ndarray.INDArray},com.thoughtworks.deeplearning.Layer.Batch.Aux[this.Data,com.thoughtworks.deeplearning.DifferentiableINDArray.INDArrayPlaceholder.Delta]]
result / result.sum(1)
deeplearning: replace type Double = Do[_ <: Tape.Aux[scala.Double, scala.Double]] with type Double = Do[Tape.Aux[scala.Double, scala.Double]]
raii
do
tryt
remove CovariantT
remove EitherTNondeterminism
remove FreeTParallelApplicative
remove KleisliParallelApplicative
backward, forceBackward and rawBackward seem too messy. We should remove forceBackward and isTrainable from the public API.
def toLayerTest(implicit from: From[Double]##T) = {
  1.0.toLayer
  Seq(1.0).toLayer
  Seq(1.0.toLayer).toLayer
  Seq(1.toLayer).toLayer
  Seq(1).toLayer
}
error:
[error] DeepLearning.scala/src/test/scala/com/thoughtworks/deeplearning/SeqSpec.scala:38: value toLayer is not a member of Double
[error] 1.0.toLayer
[error] ^
[error] DeepLearning.scala/src/test/scala/com/thoughtworks/deeplearning/SeqSpec.scala:39: value toLayer is not a member of Seq[Double]
[error] Seq(1.0).toLayer
[error] ^
[error] DeepLearning.scala/src/test/scala/com/thoughtworks/deeplearning/SeqSpec.scala:40: value toLayer is not a member of Double
[error] Seq(1.0.toLayer).toLayer
[error] ^
[error] DeepLearning.scala/src/test/scala/com/thoughtworks/deeplearning/SeqSpec.scala:41: value toLayer is not a member of Int
[error] Seq(1.toLayer).toLayer
[error] ^
[error] DeepLearning.scala/src/test/scala/com/thoughtworks/deeplearning/SeqSpec.scala:42: value toLayer is not a member of Seq[Int]
[error] Seq(1).toLayer
[error] ^
[error] 5 errors found
At the moment, DslExpression and DslType only work for OpenCL. We should create abstract types in order to support other compute platforms, e.g. CUDA.
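A hypothetical sketch of the abstraction (ComputePlatform and the objects below are illustrative names, not the actual API): each platform supplies its own concrete DslExpression / DslType members.

trait ComputePlatform {
  type DslExpression
  type DslType
}

object OpenCLPlatform extends ComputePlatform {
  // e.g. generated OpenCL C source and type names
  type DslExpression = String
  type DslType = String
}

object CudaPlatform extends ComputePlatform {
  // e.g. generated CUDA C source and type names
  type DslExpression = String
  type DslType = String
}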
Implement DifferentiableFloat
Tape
ToTapeTask
TapeTaskFactory
TapeTask
differentiable-float
differentiable-float-Spec
DifferentiableKernel
DifferentiableKernelSpec
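As context for the Tape-related items above, a hand-rolled sketch of the Tape concept specialized to Float (signatures are illustrative, not the actual DeepLearning.scala API): a Tape pairs forward data with a backward action for its delta type.

trait Tape {
  type Data
  type Delta
  def data: Data
  def backward(delta: Delta): Unit
}

// A trainable Float weight: backward applies plain gradient descent.
final class FloatWeight(var data: Float, learningRate: Float) extends Tape {
  type Data = Float
  type Delta = Float
  def backward(delta: Float): Unit = {
    data -= learningRate * delta
  }
}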
Momentum update
Nesterov Momentum
Adagrad
RMSprop
Adam
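A minimal standalone sketch of two of the update rules listed above, in plain Scala (not the DeepLearning.scala Optimizer API):

// Classic momentum: v := mu * v - lr * g; w := w + v
final class Momentum(learningRate: Double, mu: Double = 0.9) {
  private var velocity = 0.0
  def updated(weight: Double, gradient: Double): Double = {
    velocity = mu * velocity - learningRate * gradient
    weight + velocity
  }
}

// Adagrad: cache := cache + g^2; w := w - lr * g / (sqrt(cache) + eps)
final class Adagrad(learningRate: Double, epsilon: Double = 1e-8) {
  private var cache = 0.0
  def updated(weight: Double, gradient: Double): Double = {
    cache += gradient * gradient
    weight - learningRate * gradient / (math.sqrt(cache) + epsilon)
  }
}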
Update the demo to use the DeepLearning.scala 2.0 milestone version
This should compile:
implicit def optimizer: Optimizer = new LearningRate {
  def currentLearningRate() = 1
}

val weight: Do[Borrowing[Tape.Aux[INDArray, INDArray]]] = (Nd4j.ones(4, 4) * 10).toWeight

def myNetwork(input: INDArray): Do[Borrowing[Tape.Aux[INDArray, INDArray]]] = {
  abs(weight)
}
but actually it does not:
Error:(429, 11) type mismatch;
found : weight.type (with underlying type com.thoughtworks.raii.asynchronous.Do[com.thoughtworks.raii.ownership.Borrowing[com.thoughtworks.deeplearning.Tape.Aux[org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray]]])
required: com.thoughtworks.deeplearning.PolyFunctions.abs.ProductCase.Aux[shapeless.HNil,?]
(which expands to) shapeless.poly.Case[com.thoughtworks.deeplearning.PolyFunctions.abs.type,shapeless.HNil]{type Result = ?}
abs(weight)
A workaround:

implicit def optimizer: Optimizer = new LearningRate {
  def currentLearningRate() = 1
}

val weight: Do[Borrowing[Tape.Aux[INDArray, INDArray]]] = (Nd4j.ones(4, 4) * 10).toWeight

// Omitting the explicit return type lets the implicit Case be resolved.
def myNetwork(input: INDArray) = {
  abs(weight)
}
or:

implicit def optimizer: Optimizer = new LearningRate {
  def currentLearningRate() = 1
}

val weight: Do[Borrowing[Tape.Aux[INDArray, INDArray]]] = (Nd4j.ones(4, 4) * 10).toWeight

def myNetwork(input: INDArray): Do[Borrowing[Tape.Aux[INDArray, INDArray]]] = {
  // Binding abs(weight) to a local val first also compiles.
  val result = abs(weight)
  result
}
A possible solution: add

def abs(a: AnyRef)(implicit c: abs.Case[a.type]): c.Result = c[a.type](a)

to PolyFunctions, and make Do a subtype of AnyRef, i.e. type Do[A] <: AnyRef.
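For context, a minimal standalone sketch of the shapeless Poly1 mechanism that PolyFunctions-style operators such as abs build on (myAbs and PolyDemo are illustrative names): each overload is an implicit Case, and a call compiles only when a Case matching the argument's static type can be found, which is exactly what fails for weight above.

import shapeless.Poly1

object myAbs extends Poly1 {
  implicit val caseDouble: Case.Aux[Double, Double] = at[Double](x => math.abs(x))
  implicit val caseInt: Case.Aux[Int, Int] = at[Int](x => math.abs(x))
}

object PolyDemo extends App {
  println(myAbs(-1.5)) // 1.5, resolved via caseDouble
  println(myAbs(-2))   // 2, resolved via caseInt
}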
Since we have releaseMap and releaseFlatMap, Do should be enough for resource management.
For what you are implementing for deep learning, with a bit more flexibility perhaps you could make this such that it can also be used to code application logic in LA / Data Flow / Reactive paradigms. Is it possible to give this flexibility?
Auto-generate differentiable-double
PendingBuffer should borrow OpenCL.Buffer and OpenCL.Event.
Weight should store OpenCL.Buffer and OpenCL.Event, not PendingBuffer.
Weight.data should duplicate OpenCL.Buffer and OpenCL.Event, so that we can avoid too many try/finally statements when constructing a complicated layer.
So we can avoid Aux
So we can avoid Do[_ <: ...] everywhere
We are implementing asynchronous computing in DeepLearning.scala 2.0.
However, in order to maximize throughput, we need an on-device computing graph instead of CPU-driven asynchronous computing.
In DeepLearning.scala 3.0, we will implement an applicative-based computing graph, avoiding flatMap or map. We will keep a proper number of kernels in an on-device command queue, e.g. 3 kernels. Most of the on-CPU Futures will await command-queue availability instead of awaiting results.