Comments (31)
Maybe just `->milliseconds`.
from tech.datatype.
I have your first few examples working now:
```clojure
tech.ml.dataset.datetime-test> (nth (ds/column
                                     (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                                         (java.time.LocalDateTime/of 2020 10 01 01 01 01)]})
                                     :dt) 0)
#object[java.time.LocalDateTime 0x67283ea7 "2020-01-01T11:22:33"]
tech.ml.dataset.datetime-test> (dtype/get-value
                                (ds/column
                                 (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                                     (java.time.LocalDateTime/of 2020 10 01 01 01 01)]})
                                 :dt) 0)
#object[java.time.LocalDateTime 0x1bff2435 "2020-01-01T11:22:33"]
```
from tech.datatype.
Implementation question: why are packed datetime types not stored as milliseconds from epoch? It would enable easy lerping between dates (for missing values, for example).
This is what I'm trying to do now.
from tech.datatype.
Here is the problem: the argument to the unpacking function is missing: https://github.com/techascent/tech.datatype/blob/master/src/tech/v2/datatype/datetime.clj#L706
from tech.datatype.
OK, that should work. Also, calling update-column with `dtype-dt/unpack` will work and make `get-value` return what you expect. `tech.v2.datatype.datetime.operations/get-milliseconds-since-epoch` is a pathway that will work with either packed or unpacked datatypes.
Agreed that packed local date time should just be milliseconds since epoch. Same with packed instant.
from tech.datatype.
Careful, I'm not sure all these operations make sense. `LocalDateTime`, in particular, has no time zone, and so represents no specific instant (as ms since epoch implies). So, in order to get an instant from a `LocalDateTime`, you'd at least need to specify some offset or time zone.
Also, `LocalDateTime` can measure time down to nanoseconds.
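To illustrate that point with plain `java.time` interop (independent of tech.datatype): the same `LocalDateTime` maps to different epoch milliseconds depending on which offset you attach:

```clojure
(import '[java.time LocalDateTime ZoneOffset])

;; The same wall-clock time...
(def ldt (LocalDateTime/of 2020 1 1 11 22 33))

;; ...yields different instants depending on the offset we attach.
(def utc-millis (-> ldt (.toInstant ZoneOffset/UTC) (.toEpochMilli)))
(def est-millis (-> ldt (.toInstant (ZoneOffset/ofHours -5)) (.toEpochMilli)))

(- est-millis utc-millis)
;; => 18000000  (5 hours in milliseconds)
```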
from tech.datatype.
Glad to have help here :-).
My thinking is that things with no offset have an implied offset of zero. This unifies the localdateX objects with the instant, zonedX, and offsetX objects.
While local date times and instants can represent nanoseconds, I believe the use cases for this are very rare.
One question I have is: what is the total range of milliseconds since epoch when represented by a long integer?
Another is: what is the actual space savings in memory of 1000 long integers in a primitive array as compared to 1000 LocalDateTimes and 1000 Instants?
from tech.datatype.
👍
from tech.datatype.
> My thinking is that things with no offset have implied offset of zero

I think the same here.
Regarding nanoseconds, I don't see a use case now. But I can imagine a dataset (timeseries) with nanosecond granularity, for example high-frequency trading data or some physics data.
from tech.datatype.
So, here are the memory measurements:
```clojure
user> (require '[tech.v2.datatype.datetime :as dtype-dt])
nil
user> (def ldt (into-array (repeatedly 1000 #(dtype-dt/local-date-time))))
#'user/ldt
user> (def inst (into-array (repeatedly 1000 #(dtype-dt/instant))))
#'user/inst
user> (def lng (long-array 1000))
#'user/lng
user> (require '[clj-memory-meter.core :as mm])
nil
user> (mm/measure ldt)
"74.2 KB"
user> (mm/measure inst)
"27.4 KB"
user> (mm/measure lng)
"7.8 KB"
user> (dtype-dt/instant)
#object[java.time.Instant 0x37c112b3 "2020-06-28T20:44:21.581Z"]
```
One thing I have wanted to do already, and that is actually pretty hard, is to take a column of local-date objects and add a zone-id to them (and an optional offset from midnight) to get an instant or a zoned date time.
The example that is nontrivial is my stocks example. For reference:
```clojure
user> (ds/head 10 (ds/->dataset "test/data/stocks.csv"))
test/data/stocks.csv [10 3]:

| symbol |       date | price |
|--------|------------|------:|
|   MSFT | 2000-01-01 | 39.81 |
|   MSFT | 2000-02-01 | 36.35 |
|   MSFT | 2000-03-01 | 43.22 |
|   MSFT | 2000-04-01 | 28.37 |
|   MSFT | 2000-05-01 | 25.45 |
|   MSFT | 2000-06-01 | 32.54 |
|   MSFT | 2000-07-01 | 28.40 |
|   MSFT | 2000-08-01 | 28.40 |
|   MSFT | 2000-09-01 | 24.53 |
|   MSFT | 2000-10-01 | 28.02 |
```
Let's say these are closing quotes. They are US east coast time regardless. That means they are taken at 5 PM each day in the east coast timezone, which has a different offset from UTC depending on the time of year, so you can't just add a fixed offset. What I want is a transformation to zoned date times with the correct timezone, with year, month, and day matching the above, 17 as the hour, and everything else zero.
There is a function that takes a timezone zone-id and a millisecond offset and returns a reader whose zoned date times look like the localdates above. It is harder than one might think; you almost have to work backwards, as the display shouldn't change much, but that means the values have a far more complex translation to epoch-milliseconds. So that is one thing.
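A plain `java.time` sketch of that transformation (assuming the `America/New_York` zone; `closing-zdt` is an illustrative name) shows why a fixed offset can't work: the 17:00 closing time lands on different UTC offsets across the year:

```clojure
(import '[java.time LocalDate LocalTime ZoneId ZonedDateTime])

(def eastern (ZoneId/of "America/New_York"))

(defn closing-zdt
  "Interpret a LocalDate as a 17:00 closing time in the eastern zone."
  [^LocalDate ld]
  (ZonedDateTime/of ld (LocalTime/of 17 0) eastern))

(closing-zdt (LocalDate/of 2000 1 3))   ;; EST, offset -05:00
(closing-zdt (LocalDate/of 2000 7 3))   ;; EDT, offset -04:00

;; The two offsets differ, so no single millisecond offset works:
(= (.getOffset (closing-zdt (LocalDate/of 2000 1 3)))
   (.getOffset (closing-zdt (LocalDate/of 2000 7 3))))
;; => false
```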
The second thing was how many milliseconds/years can be represented in a long integer. Turns out to be hundreds of millions.
```clojure
user> (long (/ Long/MAX_VALUE (* (dtype-dt/milliseconds-in-day) 365)))
292471208
```
We could use microseconds instead of milliseconds and still cover far longer than recorded history, with better resolution. Microseconds are harder to reason about than milliseconds, so that is a thing, but we hide a lot of that with the operator namespace functions.
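Repeating the range check above for microseconds: a signed long still covers on the order of 292,000 years, which supports the proposal:

```clojure
;; Microseconds per day: 24 h * 60 min * 60 s * 1000 ms * 1000 us.
(def microseconds-in-day (* 24 60 60 1000 1000))

;; Approximate number of years representable in a signed long.
(long (/ Long/MAX_VALUE (* microseconds-in-day 365)))
;; => 292471
```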
So I propose:
- packed-local-date-time, packed-instant both use microsecond long representation. At least then it is standard and known.
- Work out exactly how to do what I propose above with local-date and local-date-time - interpret this column as having come from this time zone (and the inverse). There is a way to do this somewhat efficiently using somewhat arcane java interfaces that I want to research.
from tech.datatype.
Leaving representation for a while:
> Also, calling update-column with dtype-dt/unpack will work and make get-value return what you expect.
But then I have to check whether the value is packed or not and apply `unpack` for every supported temporal class.
To rephrase my case: given a datetime column (of any supported type), I want to get milliseconds from epoch for a particular value. I don't want to apply it to the whole column, and I also don't want to dispatch behaviour based on the datetime representation (packed or not).
IMHO packed datetime is just an internal representation used for optimization and shouldn't be exposed to the user. At least in the current implementation.
from tech.datatype.
That is fair, and I think many users will have that expectation. I'm not sure exactly how to provide that with the current implementation without a ton of case statements.
Unpack should be safe to call on anything; only for packed datatypes should it do anything, else it is the identity operation. Ditto for pack; only for packable types is it anything other than identity. I changed that today. That is at least a decent workaround in a lot of cases. Packed types aren't totally opaque; they are guaranteed to have natural ordering, so `<`, `<=`, `==`, etc. work as expected. But aside from that, I agree they are opaque, and most users, when interacting with them, would want the unpacked type.
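The idempotent pack/unpack behavior could be sketched like this (a toy dispatch with invented names, not the actual tech.datatype implementation): dispatch on the datatype keyword and fall back to identity, so unpack is always safe to call:

```clojure
;; Hypothetical sketch: unpack dispatches on a datatype keyword and is the
;; identity for anything that isn't packed. Names here are illustrative.
(defmulti unpack-value (fn [datatype _value] datatype))

(defmethod unpack-value :packed-local-date-time
  [_ v]
  ;; Pretend `v` is a packed long and decode it here.
  (str "decoded:" v))

;; Identity for everything else, so unpack is safe on unpacked values too.
(defmethod unpack-value :default
  [_ v]
  v)

(unpack-value :local-date-time "2020-01-01T11:22:33")
;; => "2020-01-01T11:22:33"  (already unpacked: identity)
```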
One thing to note is that calling unpack doesn't convert the column. It returns a reader that unpacks the data inline when read. `get-epoch-milliseconds` also returns a reader that unpacks and then gets the desired value only when read. So both of these methods are cheap to call, as they really just set up an equation that executes lazily as needed.
How often, when working with an entire column, do you need to get just the milliseconds from one index in the column? What algorithm in fact does this? There is a dispatch cost to that operation that we can amortize across a column but not on a single object; unpacking columns amortizes that cost while costing no additional storage.
So, I agree with your expectation, but I am not certain how to make it really happen. It seems to me that creating an ephemeral column, or just an ephemeral reader that does the translation, is very cheap, while building smart readers specifically for packed types, at least at the datatype level, is quite hard unless I know more about the operation (like that you intend to print the thing).
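The ephemeral-reader idea can be sketched in plain Clojure (a hypothetical helper, not the library's actual reader protocol): an `Indexed` view that decodes only on read, allocating no new storage:

```clojure
(defn lazy-unpack-view
  "Return an indexed view over `data` that applies `unpack-fn` lazily,
  only when an element is actually read. Illustrative sketch of the
  'reader' idea; tech.datatype uses its own reader protocols."
  [data unpack-fn]
  (reify clojure.lang.Indexed
    (count [_] (count data))
    (nth [_ i] (unpack-fn (clojure.core/nth data i)))
    (nth [_ i not-found]
      (if (and (>= i 0) (< i (count data)))
        (unpack-fn (clojure.core/nth data i))
        not-found))))

;; No decoding happens until we index in:
(def view (lazy-unpack-view [10 20 30] #(* 1000 %)))
(nth view 1)
;; => 20000
```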
from tech.datatype.
> One thing is that calling unpack doesn't convert the column. It returns a reader that when read is called unpacks the data inline.
Oops... I don't know why I didn't think about it (again...). If unpack is safe for non-packed datetime types, it's perfectly OK!
I see that I misled you a little bit in my latest comment; we're discussing a lot of things here. Epoch milliseconds are not enough; I also work with `LocalTime`.
I need a continuous date/time/datetime representation to do linear interpolation. For dates it's milliseconds from epoch; for time/duration it's a number of milliseconds. I know my datatype, so I feel safe using both `dt/to-milliseconds` and `dt/from-milliseconds`. But unfortunately `dt/to-milliseconds` fails due to the bug I mentioned above (you do not pass the argument to the unpacking functions here and in all the following conversions: https://github.com/techascent/tech.datatype/blob/master/src/tech/v2/datatype/datetime.clj#L706).
So when this is fixed, it will be enough for me to proceed.
```clojure
(def ds (ds/->dataset {:dt [(java.time.LocalTime/of 11 22 33)
                            (java.time.LocalTime/of 01 01 01)]}))

(dtype/get-value (ds :dt) 0)
;; => 186024168

;; convert to continuous representation and back, knowing the type
(-> (ds :dt)
    (dtype/get-value 0)
    (dt/unpack-local-time) ;; <- I should call dt/to-milliseconds here
    (dt/local-time->milliseconds)
    (dt/from-milliseconds :packed-local-time))
;; => 186024168

(def ds (ds/->dataset {:dt [(java.time.LocalDateTime/of 2010 11 22 11 22 33)
                            (java.time.LocalDateTime/of 2011 11 11 01 01 01)]}))

(dtype/get-value (ds :dt) 0)
;; => 565776892491628776

(-> (ds :dt)
    (dtype/get-value 0)
    (dt/unpack-local-date-time) ;; <- I should call dt/to-milliseconds here
    (dt/local-date-time->milliseconds-since-epoch)
    (dt/from-milliseconds :packed-local-date-time))
;; => 565776892491628776
```
from tech.datatype.
I will fix line 706 that you pointed out, for sure.
to-continuous is useful; it would be good for millisecond descriptive stats, along with a from-continuous that just takes a datatype.
```clojure
user> (require '[tech.v2.datatype :as dtype])
nil
user> (require '[tech.v2.datatype.datetime :as dtype-dt])
nil
user> (require '[tech.v2.datatype.datetime.operations :as dtype-dt-ops])
nil
user> (require '[tech.v2.datatype.argtypes :as argtypes])
nil
user> (defn to-continuous-representation
        "Convert datetime types to either milliseconds or milliseconds-since-epoch"
        [data]
        (let [datatype (dtype/get-datatype data)]
          (when-not (dtype-dt/datetime-datatype? datatype)
            (throw (Exception. (format "Invalid datatype for datetime operation: %s"
                                       datatype))))
          (if (dtype-dt/millis-datatypes datatype)
            (dtype-dt-ops/get-milliseconds data)
            (dtype-dt-ops/get-epoch-milliseconds data))))
#'user/to-continuous-representation
user> (def test-data (into-array (repeatedly 5 dtype-dt/local-date)))
#'user/test-data
user> (def test-ld (into-array (repeatedly 5 dtype-dt/local-date)))
#'user/test-ld
user> (def test-dur (into-array (repeatedly 5 dtype-dt/duration)))
#'user/test-dur
user> (to-continuous-representation test-ld)
[1593388800000 1593388800000 1593388800000 1593388800000 1593388800000]
user> test-ld
[#object[java.time.LocalDate 0x44ce27a "2020-06-29"],
 #object[java.time.LocalDate 0x34e0e63c "2020-06-29"],
 #object[java.time.LocalDate 0x6d464303 "2020-06-29"],
 #object[java.time.LocalDate 0x87dee20 "2020-06-29"],
 #object[java.time.LocalDate 0x4674d423 "2020-06-29"]]
user> (to-continuous-representation test-ld)
[1593388800000 1593388800000 1593388800000 1593388800000 1593388800000]
user> (dtype/get-datatype *1)
:epoch-milliseconds
user> (def test-dur (into-array (repeatedly 5 dtype-dt/duration)))
#'user/test-dur
user> (to-continuous-representation test-dur)
[0 0 0 0 0]
user> (dtype/get-datatype *1)
:int64
user> ((to-continuous-representation test-dur) 0)
0
user> ((to-continuous-representation test-ld) 0)
1593388800000
user> (type *1)
java.lang.Long
user> ((to-continuous-representation (dtype-dt/pack test-ld)) 0)
1593388800000
user> ((to-continuous-representation (dtype-dt/pack test-dur)) 0)
0
```
from tech.datatype.
That will work now. Making `get-value` return an unpacked object is also an option (like putting `unpack` in the get-value call). The above pathway will, I think, be more efficient in every case than an updated get-value call. But I bet a lot of people will use `get-value` and be surprised.
Also, now we are really getting into it. The epoch-milliseconds will print as instants when in a printable type (like a column or a tensor). They are an alias datatype aliased to int64. This is mostly how the packed-X datatypes are implemented and why, for instance, sort is efficient on them, but you are also seeing the holes in that abstraction.
from tech.datatype.
Hmmm, in that sense unsigned types are almost aliases too. Or rather, I can imagine a type system that can build unsigned types, packed types, and unit datatypes (like epoch-milliseconds) with similar rules, such as "use primitive datatype X as the backing store".
Extend that to "use struct X as the backing store" and you get complex numbers. That is a stretch for sure on the JVM, but that would be the pathway. Then, for instance, the library would know how to efficiently copy and manipulate blocks of struct data, and complex numbers by extension.
from tech.datatype.
Unsigned numbers are interesting because they have a storage datatype and an operation datatype. Maybe complex numbers would too: stored as packed data, operated on as some Complex64 class until the JVM gets proper structs.
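A concrete instance of that storage/operation split, using only Java interop: unsigned bytes stored in a `byte[]` (the storage datatype) but read through `Byte/toUnsignedInt` (the operation datatype):

```clojure
;; Storage datatype: byte (8 bits). Operation datatype: int (unsigned value).
(def storage (byte-array [(unchecked-byte 200) (unchecked-byte 255)]))

;; Reading through the "operation" view recovers the unsigned values.
(mapv #(Byte/toUnsignedInt %) storage)
;; => [200 255]

;; The raw storage view shows the signed two's-complement bytes.
(vec storage)
;; => [-56 -1]
```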
from tech.datatype.
Wow, and pulling further on that thread, that would also lend a better abstraction to things like packed durations: the operation datatype could be Duration, except for comparison and sort operations.
Baby steps... Type systems are hard :-)
from tech.datatype.
Really great idea with these two levels: an internal representation (for optimal storage and certain operations) and an external manifestation like dates. Complex numbers? Maybe, but when you want to play with dynamic systems or fractals, the best possible precision is a must.
from tech.datatype.
Lots of room up there for breaking the problem down better than I have so far in datatype, but I see this as defining a dynamic array-based language for the JVM using Clojure as the implementation language. Leaving the larger problem for a moment :-).
I took a crack at fixing parts of this issue:
Representation
- Instants are packed as microseconds-since-epoch, so their packed representation can be operated on naively if you understand that that is what they are.
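As a sketch of that representation in plain `java.time` (the helper names here are illustrative, and nanosecond precision is deliberately truncated):

```clojure
(import '[java.time Instant])

(defn instant->micros
  "Pack an Instant as microseconds since epoch (truncating nanos)."
  [^Instant inst]
  (+ (* (.getEpochSecond inst) 1000000)
     (quot (.getNano inst) 1000)))

(defn micros->instant
  "Unpack microseconds since epoch back into an Instant."
  [^long micros]
  (Instant/ofEpochSecond (quot micros 1000000)
                         (* (rem micros 1000000) 1000)))

(def t (Instant/parse "2020-06-28T20:44:21.581Z"))
(= t (micros->instant (instant->micros t)))
;; => true
```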
Continuous/Millisecond Representation:
In the tech.v2.datatype.datetime.operations namespace:
- `local-date->milliseconds-since-epoch` - Vectorized version of the datetime function of the same name. Optional arguments indicate local-time and zone-id or zone-offset.
- `local-date-time->milliseconds-since-epoch` - Vectorized version of the datetime function of the same name. Optional argument indicates zone-id.
- `zoned-date-time->milliseconds-since-epoch` - Vectorized version of the datetime function of the same name. Optional argument indicates zone-id.
- `milliseconds-since-epoch->local-date` - Vectorized version of the datetime function of the same name. Optional argument indicates zone-id.
- `milliseconds-since-epoch->local-date-time` - Vectorized version of the datetime function of the same name. Optional argument indicates zone-id.
- `milliseconds-since-epoch->zoned-date-time` - Vectorized version of the datetime function of the same name. Optional argument indicates zone-id.
- `->milliseconds` - Vectorized operation to convert generic datetime objects to a millisecond representation, using defaults for any extra parameters.
- `milliseconds->datetime` - Vectorized operation to convert milliseconds to a specific datetime type, using defaults for any time or zone parameters.
Optimizations
Removed a lot of type-specific code by just using pack/unpack globally. This means get-years doesn't have to pack/unpack its data inside the operator itself; rather, unpack is called outside the loop by the parent function. The implication is the removal of large sets of packed-type-specific operators. Comparison operators are still necessary, as they operate in packed space.
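As a hypothetical sketch of that hoisting (using epoch milliseconds as a stand-in for the real packed format, and an invented `unpack-ldt` decoder): the datatype dispatch happens once per column, outside the element loop:

```clojure
(import '[java.time LocalDateTime ZoneOffset Instant])

;; Toy "packed" representation: epoch milliseconds (UTC), as discussed above.
(defn unpack-ldt [^long millis]
  (LocalDateTime/ofInstant (Instant/ofEpochMilli millis) ZoneOffset/UTC))

(defn get-years
  "Dispatch on the column datatype once, outside the element loop; the
  per-element operator then sees plain LocalDateTime values."
  [datatype col]
  (let [decode (case datatype
                 :packed-local-date-time unpack-ldt
                 identity)]
    (mapv (fn [v] (.getYear ^LocalDateTime (decode v))) col)))

(get-years :packed-local-date-time [0 1577877753000])
;; => [1970 2020]
```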
from tech.datatype.
With 5.01, tentatively closing this issue.
from tech.datatype.
Chris, is it possible to add `LocalTime` with milliseconds from midnight?
Also, `dt/to-milliseconds` still does not work, due to the argument not being passed:
```clojure
(dto/->milliseconds (java.time.LocalTime/of 11 22 33))
;; => 0

(dt/to-milliseconds 568580556948144360 :packed-local-date-time)
;; Exception
1. Unhandled clojure.lang.ArityException
   Wrong number of args (0) passed to:
   tech.v2.datatype.datetime/unpack-local-date-time

   AFn.java:     429 clojure.lang.AFn/throwArity
   AFn.java:      28 clojure.lang.AFn/invoke
   datetime.clj: 784 tech.v2.datatype.datetime/packed-local-date-time->milliseconds-since-epoch
```
from tech.datatype.
Also another three problems:
```clojure
(nth (ds/column (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                    (java.time.LocalDateTime/of 2020 10 01 01 01 01)]}) :dt) 0)
;; => 568580556948144360 (expecting a date, since it can't convert to milliseconds)

(dtype/get-value (ds/column (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                                (java.time.LocalDateTime/of 2020 10 01 01 01 01)]}) :dt) 0)
;; throws an exception:
1. Unhandled java.lang.Exception
   Invalid datatype: null

   casting.clj:   87 tech.v2.datatype.casting/ensure-valid-datatype
   casting.clj:   84 tech.v2.datatype.casting/ensure-valid-datatype
   datatype.clj: 450 tech.v2.datatype/->reader
   datatype.clj: 443 tech.v2.datatype/->reader
   RestFn.java:  423 clojure.lang.RestFn/invoke
   column.clj:   184 tech.ml.dataset.impl.column.Column/__GT_reader
   base.cljc:     55 tech.v2.datatype.base$get_value/invokeStatic
   base.cljc:     50 tech.v2.datatype.base$get_value/invoke
   datatype.clj: 160 tech.v2.datatype/get-value
   datatype.clj: 157 tech.v2.datatype/get-value

;; column with missing is not convertible to milliseconds
(dto/->milliseconds (ds/column (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                                   nil
                                                   (java.time.LocalDateTime/of 2020 10 01 01 01 01)]}) :dt))
1. Caused by java.time.DateTimeException
   Invalid value for MonthOfYear (valid values 1 - 12): 0

   ValueRange.java:          311 java.time.temporal.ValueRange/checkValidValue
   ChronoField.java:         717 java.time.temporal.ChronoField/checkValidValue
   LocalDate.java:           269 java.time.LocalDate/of
   PackedLocalDate.java:      81 tech.v2.datatype.PackedLocalDate/asLocalDate
   PackedLocalDateTime.java:  70 tech.v2.datatype.PackedLocalDateTime/asLocalDateTime
   datetime.clj:             779 tech.v2.datatype.datetime/packed-local-date-time-reader/reify
   unary_op.clj:             277 tech.v2.datatype.unary-op/fn/reify
   reader.clj:               343 tech.v2.datatype.reader/fn/reify
   LongReaderIter.java:       21 tech.v2.datatype.LongReaderIter/nextLong
   LongIterator.java:         43 it.unimi.dsi.fastutil.longs.LongIterator/next
```
from tech.datatype.
Fixing `to-milliseconds` with a test now.
As to the first: we do have support for java.time.LocalTime and a packed version of it, but we use a specific packed class as opposed to a (normal) definition like yours. Yes, it is possible, but I am not totally clear on what you want:
1. Would you like to have a `packed-local-time` datatype that compresses a local time into an integer?
2. Would you like to have a formalized definition for `packed-local-time` that is simply a 32-bit integer of milliseconds since midnight?

We have 1, and having 2 would probably be very wise. I just used TableSaw's PackedLocalTime struct; I am not strongly attached to that in any way and feel a formalized definition is better by a lot.
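For what it's worth, option 2 is easy to sketch with plain `java.time`: the maximum value, 86,399,999 ms, fits comfortably in a 32-bit signed integer (helper names here are hypothetical):

```clojure
(import '[java.time LocalTime])

(defn local-time->millis
  "Pack a LocalTime as (int) milliseconds since midnight."
  [^LocalTime lt]
  (int (quot (.toNanoOfDay lt) 1000000)))

(defn millis->local-time
  "Unpack milliseconds since midnight back into a LocalTime."
  [millis]
  (LocalTime/ofNanoOfDay (* (long millis) 1000000)))

(local-time->millis (LocalTime/of 11 22 33))
;; => 40953000
(= (LocalTime/of 11 22 33)
   (millis->local-time 40953000))
;; => true
```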
from tech.datatype.
Hmm, that last one is hard architecturally. datatype doesn't know anything about missing values, and it appears that the default missing-value indicator for longs fails to convert to a valid LocalDateTime. What you want, I think, is to replace missing with 0 in the millisecond list, but I am not totally sure.
I think the column's default object reader should unpack the source data. That would avoid at least some of these issues and would have avoided confusion and quite a bit of discussion on this issue earlier. Somewhere in there is probably the issue with get-value.
from tech.datatype.
The case is: I want to replace missing values in a column for every possible date/time type, regardless of packed or unpacked representation. One of the strategies I use is linear interpolation. To do this for dates, I convert the dates to milliseconds, apply lerp, and convert back to the storage datatype.
The entry point is this function:
```clojure
;; using `dto` functions
(defn- lerp-time
  [datatype mn mx steps]
  (let [vs (lerp-long (dto/->milliseconds mn) (dto/->milliseconds mx) steps)]
    (seq (dto/milliseconds->datetime datatype vs))))

;; or using `dt` functions
(defn- lerp-time
  [datatype mn mx steps]
  (let [vs (lerp-long (dt/to-milliseconds mn datatype) (dt/to-milliseconds mx datatype) steps)]
    (map #(dt/from-milliseconds % datatype) vs)))
```
where:
- `datatype` - the same as the column datatype
- `mn`, `mx` - the dates/times surrounding a gap in the data
- `steps` - the number of missing values to fill
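For reference, a self-contained version of that idea for `LocalDateTime`, using only `java.time` and the implied-UTC-offset convention discussed above (`lerp-ldt` is an illustrative name, not a library function):

```clojure
(import '[java.time LocalDateTime Instant ZoneOffset])

(defn ldt->millis [^LocalDateTime ldt]
  (-> ldt (.toInstant ZoneOffset/UTC) (.toEpochMilli)))

(defn millis->ldt [^long millis]
  (LocalDateTime/ofInstant (Instant/ofEpochMilli millis) ZoneOffset/UTC))

(defn lerp-ldt
  "Linearly interpolate `steps` values strictly between mn and mx."
  [mn mx steps]
  (let [a    (ldt->millis mn)
        b    (ldt->millis mx)
        step (quot (- b a) (inc steps))]
    (map #(millis->ldt (+ a (* step %))) (range 1 (inc steps)))))

(map str (lerp-ldt (LocalDateTime/of 2020 1 1 0 0 0)
                   (LocalDateTime/of 2020 1 4 0 0 0)
                   2))
;; => ("2020-01-02T00:00" "2020-01-03T00:00")
```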
Then I update the reader to fill the gaps, which works perfectly well for numbers, but not for dates, since I can't get the conversion to milliseconds working.
With the following dataset I get the issues I mentioned:
```clojure
(ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                    nil
                    (java.time.LocalDateTime/of 2020 10 01 01 01 01)]})
```
- can't convert the whole column with `dto/->milliseconds` - exception
- a value taken by `dtype/get-value` or `nth` returns the packed representation, so I have to use `dt/to-milliseconds` - exception
- `dt/to-milliseconds` works on LocalTime (but not packed); `dto/->milliseconds` doesn't.
from tech.datatype.
> I think the column's default object reader should unpack the source data. That would avoid at least some of these issues and would have avoided confusion and quite a bit of discussion on this issue earlier. Somewhere in there is probably the issue with get-value.
Yes, as I said earlier, for a user who works at the level of a dataset (and sometimes a column), the internal representation is not important. When I put in a LocalDateTime, I should get back a LocalDateTime.
No one knows what to do with `:packed-local-date-time` from the following sequence, which is just a sequence of longs. Looking at it, I can't even tell whether it's the zoned version, the local one, or maybe an instant:
```clojure
(seq (ds/column (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                    nil
                                    (java.time.LocalDateTime/of 2020 10 01 01 01 01)]}) :dt))
;; => (568580556948144360 nil 568590452383613928)
```
from tech.datatype.
For my case of filling missing values, I think I just need `dt/to-milliseconds` and `dt/from-milliseconds` working on every supported datetime datatype.
from tech.datatype.
They do as long as you don't access missing values:
```clojure
tech.ml.dataset.datetime-test> (def millis (dtype-dt-ops/->milliseconds
                                            (ds/column
                                             (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                                                 nil
                                                                 (java.time.LocalDateTime/of 2020 10 01 01 01 01)]})
                                             :dt)))
#'tech.ml.dataset.datetime-test/millis
tech.ml.dataset.datetime-test> (nth millis 0)
1577877753000
tech.ml.dataset.datetime-test> (nth millis 2)
1601514061000
tech.ml.dataset.datetime-test> (nth millis 1)
Execution error (DateTimeException) at java.time.temporal.ValueRange/checkValidValue (ValueRange.java:311).
Invalid value for MonthOfYear (valid values 1 - 12): 0
```
from tech.datatype.
OK! This is a good hint! Obvious now :)
Edit: but I think I will wait for the fixes; they simplify things for me.
from tech.datatype.
Works! Thanks again for taking care of the issues so quickly!
```clojure
(replace-missing (ds/->dataset {:dt [(java.time.LocalDateTime/of 2020 01 01 11 22 33)
                                     nil nil nil nil nil
                                     (java.time.LocalDateTime/of 2020 10 01 01 01 01)]}) :lerp)
;; => _unnamed [7 1]:
|                     :dt |
|-------------------------|
|     2020-01-01T11:22:33 |
| 2020-02-16T01:38:57.667 |
| 2020-04-01T15:55:22.333 |
|     2020-05-17T06:11:47 |
| 2020-07-01T20:28:11.667 |
| 2020-08-16T10:44:36.333 |
|     2020-10-01T01:01:01 |
```
from tech.datatype.