polynote's Introduction

polynote

Polynote is an experimental polyglot notebook environment. Currently, it supports Scala and Python (with or without Spark), SQL, and Vega.

For more information, see Polynote's website.

Why?

Current notebook solutions, like Jupyter and Zeppelin, are lacking in some fundamental features:

  • Code editing – the code editing capabilities in most notebook tools leave plenty to be desired. Why can't a notebook tool have modern editing capabilities like those you'd find in an IDE? Polynote provides useful autocomplete, parameter hints, and more – we're planning to add even more features, like jump-to-definition.
  • Text editing – you can use the WYSIWYG editor for composing text cells, so you'll know what the text will look like as you're writing. TeX equations are also supported.
  • Multi-language support – Polynote allows you to mix multiple languages in one notebook, while sharing definitions seamlessly between them.
  • Runtime insight – Polynote tries to keep you informed of what's going on at runtime:
    • The tasks area shows you what the kernel is doing at any given time.
    • The symbol table shows you what variables and functions you've defined, so you don't have to scroll around to remind yourself.
    • Compile failures and runtime exceptions are highlighted in the editor (for supported languages), so you can see exactly what's going wrong.

polynote's People

Contributors

agilelab-tmnd1991, baoqi, calmarj, dependabot[bot], dhia-gharsallaoui, easel, echohlne, ghoto, gliptak, hadrienk, holdenk, indig, jbguerraz, jd557, jeremyrsmith, jonathanindig, kangjung, kuckjwi, kuckjwi0928, lanking520, masseguillaume, mathematicalmichael, mehdi-bendriss, mjren23, mt40, omidmogasemi, sobolevn, wsargent, zero323, zzeekk

polynote's Issues

Kernel Errors cause SocketSession to hang

I'm not sure how, but it looks like whenever we generate a Kernel Error, the UI somehow loses connectivity with the backend.

Reproduce:

  • start a fresh polynote
  • generate a Kernel Error
    • one easy way to do this is to try to make a new notebook with the same name as one that already exists.
  • try to do other stuff like running cells
    • it won't work!
  • reload the page
  • try to do stuff again
    • it works!

Undo button

I accidentally deleted a cell and I want it back 😢

Document Quick Start Guide for alpha users

We already have some material in the README, but we might want to add more docs on usage, the UI, etc.

We probably want to sit down with our first alpha user and see how they interact with Polynote and what questions they ask and turn that into a FAQ / guide of some sort.

`FreshRunReq` the second time a Python cell is run

In a notebook, the second time a Python cell is run we get the following error:

Exception in thread "pool-3-thread-1" java.lang.Error: scala.tools.nsc.interactive.FreshRunReq
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: scala.tools.nsc.interactive.FreshRunReq

I believe this is due to https://github.com/polynote/polynote/blob/master/polynote-kernel/src/main/scala/polynote/kernel/lang/python/PythonInterpreter.scala#L150-L152

Unfortunately, our unit tests don't catch this because they create a fresh compiler Global for each run rather than reusing it.

If you comment out those lines, you instead run into:

type String in scala.Predef not found
scala.ScalaReflectionException: type String in scala.Predef not found
	at scala.reflect.internal.ReificationSupport$ReificationSupportImpl.select(ReificationSupport.scala:25)
	at scala.reflect.internal.ReificationSupport$ReificationSupportImpl.selectType(ReificationSupport.scala:13)
	at scala.reflect.internal.ReificationSupport$ReificationSupportImpl.selectType(ReificationSupport.scala:11)
	at polynote.kernel.lang.python.PythonInterpreter$$typecreator1$1.apply(PythonInterpreter.scala:224)
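
For a regression test, we'd need to run two cells against the same interpreter instance instead of a fresh one per run. A minimal sketch — the factory and runCode signatures below are hypothetical:

// Regression-test sketch (names and signatures hypothetical): the point is
// to run two cells against ONE interpreter, and therefore one compiler
// Global, instead of building a fresh Global per run as the current tests do.
val interp = PythonInterpreter()                  // hypothetical factory
interp.runCode("x = 1").unsafeRunSync()           // first run passes today
interp.runCode("x + 1").unsafeRunSync()           // must not die with FreshRunReq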

Set up CI

We should have something that automatically runs tests, ideally on PRs. There are probably a lot of options out there; any ideas?

Better validation for new notebook creation

I think we should clarify that the new notebook creation dialog asks for a notebook name, not necessarily a full path.

If users do provide a path (e.g., one including directories), we won't create the intermediate directories – should we?
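
If we do decide to create them, a minimal sketch using java.nio.file (the surrounding plumbing is hypothetical):

import java.nio.file.{Files, Path}

// Sketch: resolve the requested name under the notebook root, refuse
// anything that escapes the root, and create the missing parent
// directories before writing the new notebook file.
def resolveNotebookPath(root: Path, name: String): Path = {
  val target = root.resolve(name).normalize()
  require(target.startsWith(root), "notebook path must stay inside the notebook root")
  Files.createDirectories(target.getParent)
  target
}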

Server config file

We need some server configuration, which should live in a file (preferably YAML). It would contain server-specific configuration (port, notebook path(s), etc.) as well as the default notebook config for new notebooks.
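
One possible shape, sketched as a Scala model alongside the YAML it would map to (all keys and defaults hypothetical):

// Hypothetical YAML shape:
//
//   listen:
//     port: 8192
//   storage:
//     notebook_dirs:
//       - notebooks
//   default_notebook_config:
//     dependencies: []
//
// and a Scala model it could deserialize into:
case class ServerConfig(
  port: Int = 8192,
  notebookDirs: List[String] = List("notebooks"),
  defaultNotebookConfig: Map[String, List[String]] = Map.empty
)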

Make sidebars collapsible

Currently the sidebars (Kernel and Notebooks) are ever-present. You can resize them, but you can't "collapse" them – and you probably don't need the Notebooks panel most of the time. We should make them collapsible.

Error when calling `globals()` or `locals()` in Python

Uncaught exception<class 'RuntimeError'>: wrapping java exception in pyjobject failed. (jep.JepException)
jep.Jep.getValue(Jep.java:-2)
jep.Jep.getValue(Jep.java:646)
polynote.kernel.lang.python.PythonInterpreter$$anonfun$runCode$1.apply(PythonInterpreter.scala:169)
polynote.kernel.lang.python.PythonInterpreter$$anonfun$runCode$1.apply(PythonInterpreter.scala:122)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:85)
cats.effect.internals.IORunLoop$.startCancelable(IORunLoop.scala:41)
cats.effect.internals.IOBracket$BracketStart.run(IOBracket.scala:74)
cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:70)
cats.effect.internals.Trampoline.startLoop(Trampoline.scala:36)
cats.effect.internals.TrampolineEC$JVMTrampoline.cats$effect$internals$TrampolineEC$JVMTrampoline$$super$startLoop(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply$mcV$sp(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply(TrampolineEC.scala:93)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:92)
cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:44)
cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:60)
cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:41)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:134)
cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:336)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:357)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:303)
cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:748)

duplicate out on notebook load

  1. Load notebook
  2. Run a cell that returns a value (i.e., ends with an expression such as "hello")
  3. You will get an Out result (e.g., Out | "hello")
  4. reload page
  5. You'll see multiple copies of the Out value.

One of the Out values comes from loading the notebook, and the other one comes from updating the Runtime Symbol Table.

Show stats and more info in the UI

We've had various ideas floating around regarding the kind of stats data we can display in the UI.

Here are some of the ideas we've been thinking of:

  • Little sparklines underneath a cell while it's running, showing CPU/Mem, maybe Spark info, etc.
  • Show number of Spark executors somewhere (maybe in the sparklines, maybe in the Info panel)

Feel free to add more to this issue!

`kernel.display` doesn't work in Python

Since it's not a field of the class – it's a method (if anything).

We might have to make the Runtime a Java class rather than a Scala one, so it can have a field called display.
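
To illustrate the mismatch (simplified; not the actual Runtime definition):

class Display  // stub type, for illustration only

// A Scala `val` compiles to a private field plus an accessor method, so
// when Jep resolves `kernel.display` via attribute access it finds a
// method, not a field.
object ScalaStyleRuntime {
  val display: Display = new Display
}

// A Java class could expose a genuine public field instead, which Python
// attribute access would resolve directly:
//
//   public class Runtime {
//       public static final Display display = new Display();
//   }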

Scary errors upon completion

These completion errors are benign, but they look scary and should be suppressed.
Example:

java.lang.RuntimeException: No typed tree at position
	at polynote.kernel.lang.scal.ScalaSource$$anonfun$completionsAt$1$$anonfun$apply$27.apply(ScalaSource.scala:323)
	at polynote.kernel.lang.scal.ScalaSource$$anonfun$completionsAt$1$$anonfun$apply$27.apply(ScalaSource.scala:323)
...
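
One way to do that is to treat any completion failure as "no completions" instead of letting the exception escape to the client. A sketch, where completionsAt and pos stand in for the real ScalaSource call site:

import scala.util.Try

// Sketch: swallow benign completion failures and return an empty list
// rather than surfacing a RuntimeException and stack trace to the user.
def safeCompletionsAt(pos: Int): List[String] =
  Try(completionsAt(pos)).getOrElse(Nil)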

SQL interpreter catalog completion failure

The SQL interpreter failed to complete using the Spark catalog in the demo today. I was able to reproduce it by running a cell with:

spark.catalog.listDatabases().map(_.name).collect()

Error:

Job aborted due to stage failure: Task 6 in stage 9.0 failed 4 times, most recent failure: Lost task 6.3 in stage 9.0 (TID 97, 100.85.124.45, executor 3):
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
	at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
	at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2293)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:479)
	at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:353)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Driver stacktrace: (org.apache.spark.SparkException)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1455)
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1443)
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1442)
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
scala.Option.foreach(Option.scala:257)
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1670)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1625)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1614)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1930)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1943)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1956)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1970)
org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:941)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
org.apache.spark.rdd.RDD.withScope(RDD.scala:367)
org.apache.spark.rdd.RDD.collect(RDD.scala:940)
org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2390)
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2799)
org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2389)
org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2394)
org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2394)
org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2812)
org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2394)
org.apache.spark.sql.Dataset.collect(Dataset.scala:2370)
$notebook1.Eval$Cell1$3$.<init>(Cell1:2)
$notebook1.Eval$Cell1$3$.<clinit>(Cell1:-1)
java.lang.Class.forName0(Class.java:-2)
java.lang.Class.forName(Class.java:348)
scala.reflect.runtime.ReflectionUtils$.staticSingletonInstance(ReflectionUtils.scala:64)
scala.reflect.runtime.JavaMirrors$JavaMirror$JavaModuleMirror.instance(JavaMirrors.scala:528)
polynote.kernel.lang.scal.ScalaInterpreter$$anonfun$runCode$1$$anonfun$apply$7$$anonfun$apply$8$$anonfun$4.apply(ScalaInterpreter.scala:92)
polynote.kernel.lang.scal.ScalaInterpreter$$anonfun$runCode$1$$anonfun$apply$7$$anonfun$apply$8$$anonfun$4.apply(ScalaInterpreter.scala:84)
cats.effect.IO$Map.apply(IO.scala:1412)
cats.effect.IO$Map.apply(IO.scala:1408)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:139)
cats.effect.internals.IORunLoop$.startCancelable(IORunLoop.scala:41)
cats.effect.internals.IOBracket$BracketStart.run(IOBracket.scala:86)
cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:70)
cats.effect.internals.Trampoline.startLoop(Trampoline.scala:36)
cats.effect.internals.TrampolineEC$JVMTrampoline.cats$effect$internals$TrampolineEC$JVMTrampoline$$super$startLoop(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply$mcV$sp(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply(TrampolineEC.scala:93)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:92)
cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:44)
cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:72)
cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:52)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:136)
cats.effect.internals.IORunLoop$.start(IORunLoop.scala:34)
cats.effect.internals.IOBracket$$anonfun$apply$1.apply(IOBracket.scala:44)
cats.effect.internals.IOBracket$$anonfun$apply$1.apply(IOBracket.scala:34)
cats.effect.internals.IORunLoop$RestartCallback.start(IORunLoop.scala:337)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:119)
cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:351)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:372)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:312)
cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

Arg parsing doesn't work while running under spark-submit

@faisalzs ran into

java.lang.IllegalArgumentException: Unknown argument polynote.server.SparkServer
    at polynote.server.Server$.polynote$server$Server$$parseArgs(Server.scala:118)
    at polynote.server.Server$class.parseArgs(Server.scala:103)
    at polynote.server.SparkServer$.parseArgs(SparkServer.scala:13)
    at polynote.server.Server$class.run(Server.scala:82)
    at polynote.server.SparkServer$.run(SparkServer.scala:13)
    at cats.effect.IOApp$$anonfun$main$3.apply(IOApp.scala:67)
    at cats.effect.IOApp$$anonfun$main$3.apply(IOApp.scala:67)
    at cats.effect.internals.IOAppPlatform$.mainFiber(IOAppPlatform.scala:36)
    at cats.effect.internals.IOAppPlatform$.main(IOAppPlatform.scala:24)
    at cats.effect.IOApp$class.main(IOApp.scala:67)
    at polynote.server.SparkServer$.main(SparkServer.scala:13)
    at polynote.server.SparkServer.main(SparkServer.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

while running on his remote box under spark-submit.
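
Judging by the error, the main class name itself ends up in the argument list under spark-submit. One possible workaround sketch (heuristic only; the real parseArgs in Server.scala looks different):

// Sketch: drop a leading argument that is just our own main class name
// before handing the remainder to the normal argument parser.
def stripMainClassArg(args: List[String]): List[String] = args match {
  case first :: rest if first == "polynote.server.SparkServer" => rest
  case other => other
}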

Exception if the first cell to run in a session is Python

If a Python command is the first to be run, we get a

Uncaught exception<class 'NameError'>: name '__polynote_last__' is not defined (jep.JepException)

exception. After running any Scala cell, running Python works.

I think __polynote_last__ is undefined because jep.eval sometimes doesn't actually run your code when it's in interactive mode (http://ninia.github.io/jep/javadoc/3.8/). Setting interactive mode to false seems to resolve this – perhaps we don't need it at all?
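
A sketch of that change, assuming Jep 3.8's setInteractive (the exact setup in PythonInterpreter will differ):

import jep.Jep

// Sketch (Jep 3.8 API assumed): with interactive mode off, eval() executes
// each statement immediately instead of buffering until it sees a complete
// block, so __polynote_last__ really is assigned on the first Python run.
val jep = new Jep()
jep.setInteractive(false)
jep.eval("__polynote_last__ = None")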

Cannot complete tree at given position

Seeing a lot of these errors in the logs:

[ForkJoinPool-1-worker-13] ERROR polynote.server.SocketSession - RuntimeException: Cannot complete tree at given position
polynote.kernel.RuntimeError: RuntimeException: Cannot complete tree at given position
	at polynote.server.SocketSession$$anonfun$toResponse$1$$anonfun$4.apply(SocketSession.scala:82)
	at polynote.server.SocketSession$$anonfun$toResponse$1$$anonfun$4.apply(SocketSession.scala:79)
...
Caused by: java.lang.RuntimeException: Cannot complete tree at given position
	at polynote.kernel.lang.scal.ScalaSource.polynote$kernel$lang$scal$ScalaSource$$completionResults$1(ScalaSource.scala:315)
	at polynote.kernel.lang.scal.ScalaSource$$anonfun$completionsAt$1$$anonfun$apply$28.apply(ScalaSource.scala:321)
	at polynote.kernel.lang.scal.ScalaSource$$anonfun$completionsAt$1$$anonfun$apply$28.apply(ScalaSource.scala:320)
	at cats.syntax.EitherOps$.flatMap$extension(either.scala:149)
	at polynote.kernel.lang.scal.ScalaSource$$anonfun$completionsAt$1.apply(ScalaSource.scala:320)
	at polynote.kernel.lang.scal.ScalaSource$$anonfun$completionsAt$1.apply(ScalaSource.scala:319)
	at cats.syntax.EitherOps$.flatMap$extension(either.scala:149)
	at polynote.kernel.lang.scal.ScalaSource.completionsAt(ScalaSource.scala:319)
	at polynote.kernel.lang.scal.ScalaInterpreter.completionsAt(ScalaInterpreter.scala:223)
	at polynote.kernel.lang.scal.ScalaInterpreter.completionsAt(ScalaInterpreter.scala:23)
	at polynote.kernel.PolyKernel$$anonfun$completionsAt$1.apply(PolyKernel.scala:154)
	at polynote.kernel.PolyKernel$$anonfun$completionsAt$1.apply(PolyKernel.scala:152)
	at polynote.kernel.PolyKernel$$anonfun$withKernel$1$$anonfun$apply$17$$anonfun$apply$18.apply(PolyKernel.scala:105)
	at polynote.kernel.PolyKernel$$anonfun$withKernel$1$$anonfun$apply$17$$anonfun$apply$18.apply(PolyKernel.scala:104)
	at cats.effect.IO$Map.apply(IO.scala:1407)
	at cats.effect.IO$Map.apply(IO.scala:1403)

I'm not doing anything crazy – just typing out val x = 100 seems to trigger it.

Somehow Python result can stop imports from working

I'm a bit baffled by this one. Running a Python cell (though not every Python cell) before an import can cause a weird error message on the import.

See the following ipynb file for an example:

{
  "metadata" : {
    
  },
  "nbformat" : 4,
  "nbformat_minor" : 0,
  "cells" : [
    {
      "cell_type" : "markdown",
      "execution_count" : 0,
      "language" : "text",
      "source" : [
        "# test imports 2\n",
        "\n",
        "This is a text cell. Start editing!"
      ],
      "outputs" : [
      ]
    },
    {
      "cell_type" : "code",
      "execution_count" : 3,
      "language" : "scala",
      "source" : [
        "import java.net.URL"
      ],
      "outputs" : [
      ]
    },
    {
      "cell_type" : "code",
      "execution_count" : 1,
      "language" : "python",
      "source" : [
        "[1, 2, 3, 4]"
      ],
      "outputs" : [
        {
          "execution_count" : 1,
          "data" : {
            "text/plain" : [
            ]
          },
          "output_type" : "execute_result"
        },
        {
          "execution_count" : 1,
          "data" : {
            "text/plain" : [
              "[1, 2, 3, 4]"
            ]
          },
          "output_type" : "execute_result"
        }
      ]
    },
    {
      "cell_type" : "code",
      "execution_count" : 2,
      "language" : "scala",
      "source" : [
        "import java.net.URL"
      ],
      "outputs" : [
        {
          "execution_count" : 2,
          "data" : {
            "application/json" : [
              {
                "pos" : {
                  "sourceId" : "Cell2",
                  "start" : 0,
                  "end" : 0,
                  "point" : 0
                },
                "msg" : "kinds of the type arguments (collection.immutable.::) do not conform to the expected kinds of the type parameters (type T0).\ncollection.immutable.::'s type parameters do not match type T0's expected parameters:\nclass :: has one type parameter, but type T0 has none",
                "severity" : 2
              }
            ],
            "text/plain" : [
              "Error: kinds of the type arguments (collection.immutable.::) do not conform to the expected kinds of the type parameters (type T0).\ncollection.immutable.::'s type parameters do not match type T0's expected parameters:\nclass :: has one type parameter, but type T0 has none (0)"
            ]
          },
          "metadata" : {
            "rel" : "compiler_errors"
          },
          "output_type" : "execute_result"
        }
      ]
    }
  ]
}

Fix global/local version coherence

The algorithm used to implement collaborative editing without acks is flawed. We whiteboarded it and realized there are cases where it becomes inconsistent.

This needs to be fixed. The current idea is that the client shouldn't care about global versions at all, and the server should be responsible for mapping global versions to local versions. But some thought will have to be put into how that works (or maybe the current mechanism should be scrapped in favor of straightforward vector clocks or something similar – see the sketch below).
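
For reference, the vector-clock core is small; a minimal sketch (illustrative only, not tied to the current editing protocol):

// Minimal vector clock: one logical counter per client, merged pointwise.
case class VectorClock(counters: Map[String, Int] = Map.empty) {
  // Record a local edit by this client.
  def tick(client: String): VectorClock =
    copy(counters = counters.updated(client, counters.getOrElse(client, 0) + 1))

  // Pointwise maximum: the clock after observing another replica's state.
  def merge(other: VectorClock): VectorClock =
    VectorClock(
      (counters.keySet ++ other.counters.keySet).map { k =>
        k -> math.max(counters.getOrElse(k, 0), other.counters.getOrElse(k, 0))
      }.toMap
    )

  // `this` happened strictly before `other` if every component is <= and at
  // least one is strictly smaller; otherwise the edits are concurrent.
  def happenedBefore(other: VectorClock): Boolean =
    counters.forall { case (k, v) => v <= other.counters.getOrElse(k, 0) } &&
      other.counters.exists { case (k, v) => v > counters.getOrElse(k, 0) }
}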

Dependency management feature completeness + bug fixes

I think we are missing a few features:

  • we probably want some default dependencies to be configurable (related to #10)
  • we want to be able to add exclusions

Stretch:

  • it would be super-cool to be able to use the uber-jar thing that Jayadev's working on, but that's not necessary for an alpha release.
  • would be cool to add Python dependencies (do we need a virtualenv to be present?)

EDIT: move python deps to stretch

ipynb compatibility for ipython notebook repo

  1. Currently, when I try to open a Polynote notebook in Jupyter, I get Unreadable Notebook: /Users/jindig/projects/polynote/notebooks/test notebook.ipynb AttributeError('metadata')

  2. Going forward, we may want some tests that verify our compliance with the standard, if that's our goal (how strict is that goal?)

Unable to define vars

When attempting to compile a Scala cell with a var definition, we get:

Uncaught exception<none> is not a method (scala.ScalaReflectionException)
scala.reflect.api.Symbols$SymbolApi$class.asMethod(Symbols.scala:228)
scala.reflect.internal.Symbols$SymbolContextApiImpl.asMethod(Symbols.scala:84)
polynote.kernel.lang.scal.ScalaInterpreter$$anonfun$runCode$1$$anonfun$apply$5$$anonfun$4$$anonfun$2.applyOrElse(ScalaInterpreter.scala:150)
polynote.kernel.lang.scal.ScalaInterpreter$$anonfun$runCode$1$$anonfun$apply$5$$anonfun$4$$anonfun$2.applyOrElse(ScalaInterpreter.scala:127)
scala.collection.immutable.List.collect(List.scala:314)
polynote.kernel.lang.scal.ScalaInterpreter$$anonfun$runCode$1$$anonfun$apply$5$$anonfun$4.apply(ScalaInterpreter.scala:127)
polynote.kernel.lang.scal.ScalaInterpreter$$anonfun$runCode$1$$anonfun$apply$5$$anonfun$4.apply(ScalaInterpreter.scala:108)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:137)
cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:336)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:357)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:303)
cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:748)
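
For reference, the reflection behaviour behind this, as a small sketch (Holder is just an example class):

import scala.reflect.runtime.universe._

class Holder { var x: Int = 0 }

// A `var` desugars into three term symbols: the underlying field (which is
// not a method), the getter `x`, and the setter `x_=`. Calling asMethod on
// the field symbol is what throws "<none> is not a method"; filtering on
// isMethod first avoids it.
val accessors = typeOf[Holder].decls.collect {
  case sym if sym.isMethod => sym.asMethod
}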

Creation of `kernel: Runtime` reference

Right now every Kernel/Interpreter (aside: is there a difference?) has its own predef, and the Scala predef is used to initialize the kernel Runtime reference.

This means that you won't have access to kernel in Python until Scala code is run at least once.

I think our options are:

  • Scala is special, and its predef is always run when a notebook is loaded
  • We move the kernel reference to the Symbol table
  • Each interpreter creates its own kernel reference

Switch rich text editor

We should discard the current RTE, as it isn't very robust or featureful. Instead we should use a more fully-fledged open source RTE – I'm thinking probably ProseMirror.

This will probably require work on the server side to use a more compatible model (currently both code and text are assumed to be plain text, which is stored in a rope; that would have to change).

Fix sidebar resize

Sometimes resizing the sidebar doesn't work properly. When the mouse is released, the sidebar jumps to the edge of the screen.

Reproduce by trying to make the left sidebar a little smaller.

Variables assigned to Python objects result in Strings

Related to ninia/jep#43

Cell1:

import datetime
datetime.datetime.now()

Result: resCell1: PyObject

Cell2:

d = datetime.datetime.now()

Result: d in the Symbol Table is a String

This looks like a pretty simple fix: the jep.getValue(name, classOf[PyObject]) call we already do for results just needs to be done for all new vars declared in the cell, as sketched below.
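
A sketch of that fix (the jep.python.PyObject import path assumes Jep 3.8; newVars is hypothetical):

import jep.Jep
import jep.python.PyObject

// Sketch: pin every newly-declared variable as a PyObject, just as we
// already do for the cell result, so Jep doesn't coerce it to a String.
// `newVars` is a hypothetical list of names assigned by the cell.
def pinNewVars(jep: Jep, newVars: Seq[String]): Map[String, PyObject] =
  newVars.map(name => name -> jep.getValue(name, classOf[PyObject])).toMap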

Error when running a comment-only Python cell

Uncaught exception<class 'IndexError'>: list index out of range (jep.JepException)
<string>.<module>(<string>:1)
jep.Jep.eval(Jep.java:-2)
jep.Jep.eval(Jep.java:507)
polynote.kernel.lang.python.PythonInterpreter$$anonfun$runCode$1$$anonfun$apply$4.apply(PythonInterpreter.scala:173)
polynote.kernel.lang.python.PythonInterpreter$$anonfun$runCode$1$$anonfun$apply$4.apply(PythonInterpreter.scala:159)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
cats.effect.internals.IORunLoop$.startCancelable(IORunLoop.scala:41)
cats.effect.internals.IOBracket$BracketStart.run(IOBracket.scala:86)
cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:70)
cats.effect.internals.Trampoline.startLoop(Trampoline.scala:36)
cats.effect.internals.TrampolineEC$JVMTrampoline.cats$effect$internals$TrampolineEC$JVMTrampoline$$super$startLoop(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply$mcV$sp(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply(TrampolineEC.scala:93)
cats.effect.internals.TrampolineEC$JVMTrampoline$$anonfun$startLoop$1.apply(TrampolineEC.scala:93)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:92)
cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:44)
cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:72)
cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:52)
cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:136)
cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:351)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:372)
cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:312)
cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:748)
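
A possible guard, sketched (helper name hypothetical):

// Sketch: a comment-/whitespace-only cell gives eval() nothing to run, so
// bail out before calling into Jep. This line-based check is a heuristic;
// a real implementation might ask Python's ast module instead.
def hasExecutableCode(source: String): Boolean =
  source.split("\n").map(_.trim).exists(line => line.nonEmpty && !line.startsWith("#"))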

UI Refactor

We know we need to do some refactoring of the UI. We can use this issue to keep track of what should be cleaned up, so that when we get around to it we can address a bunch of items at once.

  • We should have tests for the UI
  • UI.js is pretty big.
