
bytefish.de's Introduction

https://www.bytefish.de is generated by Pelican.

You can install Pelican by using pip:

pip install pelican

Depending on your Python installation, you might need to install the following packages to generate the pages:

pip install markdown
pip install BeautifulSoup4

To build the pages, simply run the following command from the root folder:

pelican -s settings/settings_dev.py .

The output will be written to the folder settings/output/DEV.

bytefish.de's People

Contributors: austince, baywet, bovreuil, bytefish, roy-tate


bytefish.de's Issues

Query on Spring AbstractRoutingDataSource advantage

I am trying to understand more about the AbstractRouting that Spring offers.

  1. Wouldn't a HashMap suffice for the lookup? I am trying to understand what advantage the thread-bound context brings to abstract routing (see the sketch below).
  2. What specific advantage does the AbstractRoutingDataSource bring, since the same could be achieved with simple if-else logic to determine the target from the HashMap?
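
A minimal sketch of what the thread-bound context buys you, assuming a hypothetical TenantContext holder (this is not code from the article): the routing data source resolves the target on every connection checkout, so repositories, JPA and transactions pick the right database without any if-else at the call sites.

    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

    public class TenantRoutingDataSource extends AbstractRoutingDataSource {

        // Hypothetical thread-bound tenant holder.
        public static final class TenantContext {
            private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
            public static void set(String tenant) { CURRENT.set(tenant); }
            public static String get() { return CURRENT.get(); }
            public static void clear() { CURRENT.remove(); }
        }

        // Spring calls this on every connection checkout; the returned key is
        // resolved against the map passed to setTargetDataSources(...). That is
        // what a plain HashMap plus if-else cannot give you: the lookup happens
        // at the moment a connection is requested, transparently to every
        // repository and transaction running on the current thread.
        @Override
        protected Object determineCurrentLookupKey() {
            return TenantContext.get();
        }
    }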

MarkdownEmojiExtension Error

The site is not generated.

The following error occurs: "TypeError: cannot convert dictionary update sequence element #0 to a sequence".

I use pelican 4.0.1 and Markdown 3.0.1.

Question: Usage of bytea

I tried to use AbstractMapping with bytea and got the error below. I am able to create a PGConnection, and I am using a simple Java object with a UUID and a byte[]. I used mapByteArray and I get this error.

Exception in thread "main" org.postgresql.util.PSQLException: ERROR: syntax error at or near ")"
Position: 28
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2510)
at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1172)
at org.postgresql.core.v3.QueryExecutorImpl.startCopy(QueryExecutorImpl.java:881)
at org.postgresql.copy.CopyManager.copyIn(CopyManager.java:44)
at org.postgresql.copy.PGCopyOutputStream.<init>(PGCopyOutputStream.java:45)
at de.bytefish.pgbulkinsert.PgBulkInsert.saveAll(PgBulkInsert.java:54)
at com.aexp.rtf.active.bootstrap.PostGresProvenanceSourceTest.bulkInsertEventStateData(PostGresProvenanceSourceTest.java:135)
at com.aexp.rtf.active.bootstrap.PostGresProvenanceSourceTest.main(PostGresProvenanceSourceTest.java:50)
Suppressed: de.bytefish.pgbulkinsert.exceptions.BinaryWriteFailedException: java.lang.NullPointerException
at de.bytefish.pgbulkinsert.pgsql.PgBinaryWriter.close(PgBinaryWriter.java:174)
at de.bytefish.pgbulkinsert.PgBulkInsert.saveAll(PgBulkInsert.java:58)
... 2 more
Caused by: java.lang.NullPointerException
at de.bytefish.pgbulkinsert.pgsql.PgBinaryWriter.close(PgBinaryWriter.java:169)
... 3 more
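
For reference, a minimal mapping sketch of the setup described above; the entity, schema, table and column names are hypothetical, and the exact AbstractMapping package and method names may differ between PgBulkInsert versions:

    import de.bytefish.pgbulkinsert.mapping.AbstractMapping;

    import java.util.UUID;

    // Hypothetical entity with a UUID key and a raw byte[] payload.
    class Document {
        private UUID id;
        private byte[] payload;

        public UUID getId() { return id; }
        public byte[] getPayload() { return payload; }
    }

    public class DocumentMapping extends AbstractMapping<Document> {

        public DocumentMapping() {
            // Schema and table the generated COPY statement targets:
            super("public", "documents");

            mapUUID("id", Document::getId);
            mapByteArray("payload", Document::getPayload);
        }
    }

One thing worth checking when the COPY call fails with a syntax error at or near ")" is that the mapping actually registers the schema, table and at least one column, since an empty column list makes the generated COPY statement invalid.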

Update for restsharp_custom_json_serializer.md

I'm sorry for not sending a PR, I'm in the middle of other work and it's hard to switch.
Regarding: https://github.com/bytefish/bytefish.de/blob/master/blog/restsharp_custom_json_serializer.md

RestSharp changed a bit, I updated your code a little, you can find the snippets in
adamfisher/RestSharp.Serializers.Newtonsoft.Json#8 (comment)

There's also a tiny issue of NewtonsoftJsonSerializer.Default being not defined on your NewtonsoftJsonSerializer class, but it shouldn't be a problem for anyone except total novices.

BulkInsert ends up inserting the same object record multiple times when some object properties are equal

I have a List<MyObject>, with elements created like new MyObject(name, descr, type, prop4);
I have noticed that in some cases, when some properties of different object instances are the same, BulkInsert reuses the previous instance's values.

For instance, I have:
myObject1 with name="vector_db", descr="xxxxxx", type="postgis (JNDI)", otherProps="unique prop1";
myObject2 with name="vector_db", descr="xxxxxx", type="postgis (JNDI)", otherProps="unique prop2";
myObject3 with name="vector_db", descr="xxxxxx", type="postgis (JNDI)", otherProps="unique prop3";

myObject4 with name="client_db", descr="xxxxxx", type="postgis", otherProps="unique prop";

In the database, the final data inserted for myObject1, myObject2 and myObject3 is the same, despite the fact that otherProps is not supposed to be the same.

What strategy does BulkInsert use that makes it insert the same record multiple times and ignore the other records?

precision/recall

Would you also add precision/recall analysis for the project?

Thank you for the excellent article. Unfortunately, it did not work for me.

I followed the article and created my table. However, I got this error while inserting a row into the audit table. Instead of inserting an integer into column "rev", it tried to insert a HashMap.

"ClassCastException: java.util.HashMap cannot be cast to java.lang.Integer"

Some additional information from the log:
Hibernate: select nextval ('hibernate_sequence')
Hibernate: insert into ims_audit.revinfo (revtstmp, rev) values (?, ?)
2020-02-21 08:31:34 TRACE o.h.type.descriptor.sql.BasicBinder - binding parameter [1] as [BIGINT] - [1582291894650]
2020-02-21 08:31:34 TRACE o.h.type.descriptor.sql.BasicBinder - binding parameter [2] as [INTEGER] - [71]
Hibernate: insert into ims_audit.private_charging_party (REVTYPE, REVEND, address_line1, address_line2, cell_number, city, country, county, date_of_birth, email_address, extension, fax_number, first_name, home_phone, is_hispanic, language_code, last_name, mediation_reply, middle_initial, name_prefix, name_suffix, national_origin, organization_name, other_language, position_statement_release_dte, position_statement_request, position_statement_request_dte, race, sex, sign_language_need, state, work_phone, zip_code, charging_party_id, REV) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

The problem is here:
binding parameter [27] as [INTEGER] - [{REV=PrepaRevisionEntity [getRevisionDate()=Fri Feb 21 08:31:34 EST 2020, hashCode()=1743932516, toString()=BaseRevisionEntity [revisionNumber=71, revisionTimestamp=1582291894650], getClass()=class gov.eeoc.prepa.ws.modern.entity.common.SampleRevisionEntity], respondentId=66}]

2020-02-21 08:31:34 ERROR org.hibernate.AssertionFailure - HHH000099: an assertion failure occurred (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session): java.lang.ClassCastException: java.util.HashMap cannot be cast to java.lang.Integer
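
For context, a custom Envers revision entity is usually mapped roughly like the sketch below (table and class names follow the log above, not the article's exact code); the REV column of every audit table is expected to hold the integer revision number of such an entity, so a HashMap being bound there suggests the revision reference is not resolving to that integer id.

    import org.hibernate.envers.RevisionEntity;
    import org.hibernate.envers.RevisionNumber;
    import org.hibernate.envers.RevisionTimestamp;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Sketch of a custom revision entity: Envers writes one row per revision
    // into this table, and every audit table references it through its
    // integer REV column.
    @Entity
    @Table(name = "revinfo", schema = "ims_audit")
    @RevisionEntity
    public class SampleRevisionEntity {

        @Id
        @GeneratedValue
        @RevisionNumber
        private int rev;

        @RevisionTimestamp
        private long revtstmp;

        public int getRev() { return rev; }

        public long getRevtstmp() { return revtstmp; }
    }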

Auto-configure data source in a database-per-tenant setup

Hi, thanks for the wonderful article. I am working on an application where I want to implement multitenancy with a database per tenant. I have used the same approach, but my question is about maintenance. This approach is easy to maintain if we have fixed tenants, but what if tenants keep increasing? How can we maintain this scenario without many application-level code changes, since we are adding a data source for each tenant?
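
One common way to avoid application-level changes for every new tenant is to keep the tenant-to-DataSource map mutable and register new tenants at runtime, for example from a central tenants table or a configuration service. A minimal sketch of the idea, with all class names hypothetical (this is not the article's code):

    import org.springframework.jdbc.datasource.DriverManagerDataSource;
    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

    import javax.sql.DataSource;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class DynamicTenantRoutingDataSource extends AbstractRoutingDataSource {

        // Hypothetical thread-bound tenant holder.
        public static final class CurrentTenant {
            private static final ThreadLocal<String> TENANT = new ThreadLocal<>();
            public static void set(String tenant) { TENANT.set(tenant); }
            public static String get() { return TENANT.get(); }
        }

        private final Map<Object, DataSource> tenants = new ConcurrentHashMap<>();

        public DynamicTenantRoutingDataSource() {
            // AbstractRoutingDataSource needs a target map before first use:
            setTargetDataSources(new HashMap<>(tenants));
        }

        // Registers a new tenant at runtime, e.g. after reading its connection
        // details from a central tenants table or a configuration service.
        public void registerTenant(String tenantId, String jdbcUrl, String user, String password) {
            tenants.put(tenantId, new DriverManagerDataSource(jdbcUrl, user, password));
            setTargetDataSources(new HashMap<>(tenants));
            // Rebuilds the internal resolved map so the new tenant is routable:
            afterPropertiesSet();
        }

        @Override
        protected Object determineCurrentLookupKey() {
            return CurrentTenant.get();
        }
    }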

Confusion regarding radius and sampling points in real life problems.

I went through several LBP variants. There are a radius and a number of sampling points. If we draw a circle with the given radius, the points where the circle intersects the pixels are the pixels to be taken when forming the binary code. I cannot figure out a few of the combinations below (a coordinate sketch follows after the table).
Radius | Sampling Points
1      | 4
1      | 8
1      | 16
2      | 4
2      | 8
2      | 16
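
A note on the geometry, since it is the usual source of confusion: in the circular (extended) LBP the P sampling points are not whichever pixels the circle happens to cross, but P points placed at equal angles on a circle of radius R around the center pixel; coordinates that do not fall on an integer pixel are read with bilinear interpolation. A minimal sketch of that coordinate computation (plain Java, no library assumed):

    // Computes the P sampling-point coordinates of a circular LBP
    // neighborhood with radius R around the center pixel (cx, cy).
    // Non-integer coordinates are normally read with bilinear interpolation.
    public final class LbpNeighborhood {

        public static double[][] samplingPoints(double cx, double cy, double radius, int points) {
            double[][] coords = new double[points][2];
            for (int p = 0; p < points; p++) {
                double angle = 2.0 * Math.PI * p / points;
                coords[p][0] = cx + radius * Math.cos(angle);  // x coordinate
                coords[p][1] = cy - radius * Math.sin(angle);  // y coordinate (image y grows downward)
            }
            return coords;
        }

        public static void main(String[] args) {
            // Example: radius 2, 8 sampling points around pixel (10, 10).
            for (double[] c : samplingPoints(10, 10, 2, 8)) {
                System.out.printf("x=%.3f y=%.3f%n", c[0], c[1]);
            }
        }
    }

For R = 1 and P = 8 the points lie on or very close to the 8 immediate neighbors; for larger R and P most points fall between pixels, which is where the interpolation matters.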

test

NEST not working

Hello, I am getting this error. Please enlighten me on what I am missing:
"Invalid NEST response built from a unsuccessful () low level call on POST: /documents/_search?typed_keys=true\r\n# Audit trail of this API call:\r\n - [1] BadRequest: Node: http://localhost:9200/ Took: 00:00:00.0879444\r\n - [2] CancellationRequested: Took...

neo4j_at_scale_airline_dataset

Thanks a lot for your article.

  • LOAD CSV is fully transactional.

  • If possible, you should create a constraint/index for the id-lookup fields of each entity.

  • The non-transactional bulk importer is used with neo4j-import or neo4j-admin import for offline initial loading. It can sustain about 1M records/s.

  • Neo4j doesn't really work well without an SSD (especially in terms of write performance).

  • Thanks a lot for the feedback on UNWIND and the FOREACH trick for optional relationships. You are right, both could be better.

For your query, could you try this variant and see if it makes a difference?

MATCH (r:Reason {description:'Weather'})
MATCH (a:Airport)
WITH r, a, size( (a)<-[:ORIGIN]-() ) as total
MATCH (a)<-[:ORIGIN]-(f:Flight)-[:DELAYED_BY]->(r)
WITH a, total, COUNT(f) AS num
RETURN a.abbr + ' - ' + a.name AS Airport, ((1.0 * num) / total) * 100 AS Percent
ORDER BY Percent DESC
LIMIT 10

Please note that your query is a full graph query, which is more analytical. Cypher was originally built for local, transactional queries, but it is getting there.
As with any database, Neo4j benefits from being able to keep as much data as possible in RAM, so a configuration setup that utilizes that makes sense.
And Neo4j Enterprise (which has a free developer edition) has a faster query engine.

Question on Face Recognition

Greetings,

I could find no other way to contact you so I hope this is the right method.

I am a novice programmer who collects pics of models from the 1980s and earlier. I saw your article on Face Recognizer with Open CV here http://docs.opencv.org/trunk/modules/contrib/doc/facerec/index.html

The link at the bottom of the page goes nowhere, so I began to search. I can't find this software on SourceForge either. Nevertheless, here is my question: I am willing to learn how to program in C++ if this software will help me name some of the unknowns I have in my collection. Basically, will the face recognition be able to help me add names to the unknowns I have? If not, then I don't want to move down this path, since it is a dead end.

Here is what I was thinking. I am running Windows 7. I would have a known face folder that would contain at least one face picture of a known model (more might be better to get identification). The software would allow me to select an unknown models picture in jpg format and then compare it to the pictures in the known folder. It would then show me the unknown on the left and the potential known on the right and let me 'scroll' through the potential matches and let me select the name for the unknown. At this point, I could go to Windows explorer and rename the unknown picture or I could do the renaming within the software. Lastly, it might save time to have the data from the pictures in the known folder saved in a file to reduce the search comparison time. I see you like to use Excel files so that would be fine. Maybe it would be nice to have the capability to rebuild the known picture database as a subfunction of the software.

Is the software you have provided up to this task or am I asking for too much and getting in way above my head?

Thank you for taking the time to look at my question and thoughts.

For the project https://bytefish.de/blog/fcmsharp/

Hello,

Can you add the possibility to resend a failed message again?
For example, Firebase supports a maximum message length of 4K, so if I have a list of many numeric ids I have to split them in a for statement like this:

    public static void SendIds(IEnumerable<int> ids, string operationType, string topicModel)
    {
        for (int i = 0; i < ids.Count(); i = i + 100)
        {
            var items = ids.Skip(i).Take(100);
            var idsDict = new List<Dictionary<string, string>>();
            idsDict.Add(new Dictionary<string, string>() { { "type", operationType }, { "ids", string.Concat("[" + string.Join(",", items.ToArray()) + "]") } });
            Task.Run(() => SendAsyncFcmMessage(idsDict[0], topicModel));
        }
    }

My question is: if it fails for an interval of 100 ids, I want to submit that interval again.
I can see you have FcmHttpClient, and here it returns
return JsonConvert.DeserializeObject(httpResponseContentAsString);

But I think I need the HttpResponseMessage httpResponseMessage in return, because it has a property IsSuccessStatusCode which can be true/false.

dotnet quit unexpectedly on macOS

Hi,

Can anyone please help me with this issue, which I am getting while running dotnet on my Mac?
I came across a good number of solutions, which I tried implementing, but I think I am just getting confused by the many options for resolving this issue.

Details are as follows:

.NET Command Line Tools (2.1.4)

Product Information:
Version: 2.1.4
Commit SHA-1 hash: 5e8add2190

Runtime Environment:
OS Name: Mac OS X
OS Version: 10.13
OS Platform: Darwin
RID: osx.10.12-x64
Base Path: /usr/local/share/dotnet/sdk/2.1.4/

Microsoft .NET Core Shared Framework Host

Version : 2.0.5
Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54

Provide ability to call BulkInsert from a Spring @Transactional wrapped method

I have the following method that is wrapped in Spring @Transactional and calls BulkInsert:

    @Transactional
    public RequestValidator saveProject(Project project)  {
        RequestValidator<ProjectDto> requestValidator = new RequestValidator<>();
        project.setStatus(project.getProjectType() == ProjectType.CHAINWIDE ?  ProjectStatus.InDefinition : ProjectStatus.InProgress);
        Project savedProject = projectRepository.saveAndFlush(project);
        requestValidator.setDto(new ProjectDto(savedProject));
        bulkInsertService.saveAll(requestValidator);
        return requestValidator;
    }

The issue is that BulkInsertService is "unaware" of Spring Transactions.

    @Service
    public class BulkInsertService extends ServiceBase {

        private Logger log = LoggerFactory.getLogger(BulkInsertService.class);

        private DataSource dataSource;

        @Autowired
        public void setDataSource(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List<ProjectPricing> saveAll(List<ProjectPricing> projectPricings) {
            PgBulkInsert<ProjectPricingDto> bulkInsert = new PgBulkInsert<>(new ProjectPricingMapping());

            final int bulkSize = 100000;
            try (BulkProcessor<ProjectPricingDto> bulkProcessor = new BulkProcessor<>(new BulkWriteHandler<>(bulkInsert, new DataSourceWrapper(dataSource)), bulkSize)) {
                // Now process them with the BulkProcessor:
                for (ProjectPricing p : projectPricings) {
                    bulkProcessor.add(new ProjectPricingDto(p));
                }
            } catch (Exception e) {
                log.error("error running bulkprocessor on project pricing insert", e);
            }
            return projectPricingRepository.findByProjectId(projectPricings.get(0).getProject().getId());
        }
    }

How can I use BulkInsert with Spring @Transactional (i.e. use it within a @Transactional wrapped method, so that it behaves like any other Spring method annotated with @Transactional and nested within another @Transactional method)?
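
One way to make the bulk COPY participate in the surrounding Spring transaction is to take the JDBC connection from Spring's transaction synchronization instead of opening a fresh one from the DataSource. A hedged sketch: ProjectPricingDto and ProjectPricingMapping are the types from the question, DataSourceUtils is Spring's standard helper, and the PgBulkInsert.saveAll signature (a PGConnection plus a stream of entities) is an assumption about the version in use:

    import de.bytefish.pgbulkinsert.PgBulkInsert;
    import org.postgresql.PGConnection;
    import org.springframework.jdbc.datasource.DataSourceUtils;

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.List;

    public class TransactionAwareBulkInsertService {

        private final DataSource dataSource;
        private final PgBulkInsert<ProjectPricingDto> bulkInsert;

        public TransactionAwareBulkInsertService(DataSource dataSource) {
            this.dataSource = dataSource;
            this.bulkInsert = new PgBulkInsert<>(new ProjectPricingMapping());
        }

        public void saveAll(List<ProjectPricingDto> rows) throws SQLException {
            // DataSourceUtils hands back the connection bound to the current
            // Spring transaction (if one is active), so the COPY commits or
            // rolls back together with the surrounding @Transactional method.
            Connection connection = DataSourceUtils.getConnection(dataSource);
            try {
                PGConnection pgConnection = connection.unwrap(PGConnection.class);
                bulkInsert.saveAll(pgConnection, rows.stream());
            } finally {
                // Only closes the connection if it is not transaction-bound.
                DataSourceUtils.releaseConnection(connection, dataSource);
            }
        }
    }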

Spring Boot Multitenancy guide - Error with AbstractRoutingDataSource

Hello, I followed the guide https://bytefish.de/blog/spring_boot_multitenancy/ regarding Spring Boot Multitenancy, but I have some issues with the AbstractRoutingDataSource.

My project compiles without any problem, but every time I try to run the project, I run into the following problem.

The problem is that it throws an IllegalStateException("Cannot determine target DataSource for lookup key [" + lookupKey + "]"). (You can see the output log in the attached file.)
I tried to debug the project: when it accesses TenantAwareRoutingSource, it goes to the AbstractRoutingDataSource determineTargetDataSource() method, and inside this method it throws the above exception.

I have attached the output log (including the error).
error.txt

Bear in mind that my project is a web service, so I am using SoapUI to make the requests. That means that I cannot use the HTTP interceptor to retrieve the tenant name from the header, but I already pass the tenant name as a parameter in the WSDL.

I am looking forward to your reply.

Thank you
Konstantinos
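
Since the tenant arrives as a SOAP parameter instead of an HTTP header, the thread-bound tenant name has to be set inside the endpoint itself before any repository call; otherwise determineTargetDataSource() sees a null lookup key and throws exactly this IllegalStateException. A minimal sketch, assuming a request object that carries the tenant name (endpoint, request and repository types are hypothetical; ThreadLocalStorage is the holder class from the article):

    // Hypothetical SOAP endpoint method: the tenant name comes in as part of
    // the request payload, so it is pushed into the thread-bound holder before
    // any data access and cleared afterwards.
    public class CustomerEndpoint {

        private final CustomerRepository customerRepository;

        public CustomerEndpoint(CustomerRepository customerRepository) {
            this.customerRepository = customerRepository;
        }

        public GetCustomersResponse getCustomers(GetCustomersRequest request) {
            ThreadLocalStorage.setTenantName(request.getTenantName());
            try {
                GetCustomersResponse response = new GetCustomersResponse();
                response.setCustomers(customerRepository.findAll());
                return response;
            } finally {
                // Always clear, so pooled threads do not leak the tenant.
                ThreadLocalStorage.setTenantName(null);
            }
        }
    }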

Location of Octave code for Discriminant Analysis

I would like to use your Octave code for Discriminant Analysis (10/2011).

I'm kind of new to GitHub, so probably I just missed it, but I'm not finding the code.

Can you tell me where to look?

Clark

Update async multi-tenant example

Have you thought about updating spring_boot_multitenancy_async.md to use Spring Boot's WebFlux instead of having to code it old-school style with your own threads and Async everywhere?

Detecting extra columns after last mapped column

Thanks for this great library. It works fine; the only issue I have is that the library does not seem to detect when a line contains extra columns after the last mapped column. Missing columns are detected; extra columns are not.

Select tenant

When I put this code in a controller, it produces an error:
ThreadLocalStorage.setTenantName(tenantName);

AbstractMap used by PersonMap in SqlMapper blog entry doesn't match java.util.AbstractMap

In https://github.com/bytefish/bytefish.de/blob/master/blog/sqlmapper.md, PersonMap extends AbstractMap. The built-in java.util.AbstractMap doesn't work with the code you provided. For one, it expects two generic parameters and you're only providing one. Second, there is no constructor taking two strings in AbstractMap. Lastly, there is no method named "map".

Are you referencing a different AbstractMap from a different framework?

Shared entities

How about shared entities, that is, entities used in multiple modules?
Are these located in the core module?
Let's call these the core entities.
Is there any code in place that ensures the mapping (and consequently the database tables) of core entities isn't changed or broken by the mapping in the modules?

How the tenant can be changed at runtime

Hello, I followed the guide https://bytefish.de/blog/spring_boot_multitenancy/ regarding Spring Boot Multitenancy. It is what I was looking for to solve an issue in my first Spring Boot project.

My project compiles without any problem, but every time I try to run the project I get a java.lang.NullPointerException. I don't know what is going on, but let's suppose everything works fine: could you tell me how the tenant can be changed at runtime?

Regards,
Hugo
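
For reference, the tenant is switched at runtime by setting the thread-bound tenant name before the request reaches the controller; with HTTP clients this is typically done in an interceptor that reads a tenant header. A minimal sketch, assuming the article's ThreadLocalStorage holder (the interceptor class and header name below are assumptions, use whatever your clients actually send):

    import org.springframework.web.servlet.HandlerInterceptor;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical interceptor: the tenant is switched per request by reading
    // a header and storing it in the thread-bound holder before controllers run.
    public class TenantNameInterceptor implements HandlerInterceptor {

        @Override
        public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
            // Header name is an assumption:
            ThreadLocalStorage.setTenantName(request.getHeader("X-TenantID"));
            return true;
        }

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
            // Clear after the request so pooled threads do not leak the tenant.
            ThreadLocalStorage.setTenantName(null);
        }
    }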

Add a flow diagram

Hi,

First I would like to thank you for your time and for this article.
I would like to suggest adding a flow diagram for this project, for a better understanding of what happens from the request until the real response.

I didn't completely understand the flow, but if you want I can try to create one and later adjust the diagram based on your suggestions.

I think this could help. :)

PS: Nice work!

Method not found: 'Void TinyCsvParser.CsvParserOptions..ctor(Boolean, Char)'

Hi,

I am trying to learn Elasticsearch using .NET. I downloaded your code and am trying to run it, but it gives the error below when it tries to call the following code:

private static IEnumerable GetStations(string fileName)
{
    return Parsers.StationParser
        .ReadFromFile(fileName, Encoding.ASCII)
        .Where(x => x.IsValid)
        .Select(x => x.Result)
        .AsEnumerable();
}

The error is:
Method not found: 'Void TinyCsvParser.CsvParserOptions..ctor(Boolean, Char)'

Please help me fix this issue.

java.lang.NoClassDefFoundError: de/bytefish/jtinycsvparser/tokenizer/ITokenizer

Hi, sorry, I am new to Apache Flink. I have tried to set up the project as it is.

When I run flink run -c csv.sorting.PrepareWeatherData on an already Maven-compiled jar file, it throws a NoClassDefFoundError. The jtinycsvparser dependency is already added to the POM with version 1.2.

Also, can you please guide me a little on how to run this project? I mean, which file should I execute? Again, sorry for my stupid question, since I am new to Flink.

Please take a look at the attached screenshot.

Thanks in advance.

(Screenshot attached: Selection_011)

SQL minor bug

In the stored procedure [Functions].[GetGraphSchema](@NodesSchemaName nvarchar(max)),

FROM [master].[sys].[tables] as [node]
should be replaced by
[sys].[tables]

otherwise the graph does not display, as no nodes are returned.

Subject: Parsing Command Line Arguments in .NET || Topic: Add basic arg validation

First, thanks a lot for the great starting point on building my own very simple command line arg helper.

One very simple thing I added was a last parsing step, to remove options passed without a name (only - or --).

I changed the method to return information about the parsing process:
public static IReadOnlyList<CommandLineOption> ParseOptions(string[] arguments, out string info) { ...

And as the very last step, I added:

// Clean up invalid options (e.g. - or -- without name)
int invalidOptions = results.RemoveAll(option => string.IsNullOrEmpty(option.Name));

info = $"Parsed <{results.Count}> valid and <{invalidOptions}> invalid command line args";
return results;

CORS Filter in JerseyExample project does not catch all Preflight requests

Hi,
I tried to set up a project that has to use CORS and used your tutorial at https://bytefish.de/blog/cors_with_jersey/ to do so. Unfortunately, I cannot get it to work, and I also do not really understand how it should.
I get that the base resource should handle all the preflight requests. But how does that work if you use @Path with methods in a resource?
So for example you have a Jersey application at "/api" and a resource at "/service". Of course a preflight request to "/api/service" will be caught by the BaseResource implementation. But how should that also work for a path like "/api/service/add" or "/api/service/multiply"?
The reason why I'm asking is that for me it doesn't. Browsers keep complaining about failed preflight requests. It works if I just define a new endpoint at, for example, "/api/service/add" using the OPTIONS method; then the preflight request is successful for that endpoint. So do you just assume that there are no path parts below the resource path defined at class level, except for parameters? Or am I missing something?
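
An alternative that covers arbitrary sub-paths like "/api/service/add" is to answer preflights in a @PreMatching ContainerRequestFilter instead of per-resource OPTIONS methods, so the resource path no longer matters. A hedged sketch (a generic JAX-RS filter, not the exact code from the tutorial):

    import javax.ws.rs.container.ContainerRequestContext;
    import javax.ws.rs.container.ContainerRequestFilter;
    import javax.ws.rs.container.PreMatching;
    import javax.ws.rs.core.Response;
    import javax.ws.rs.ext.Provider;
    import java.io.IOException;

    // Answers CORS preflights for every path before JAX-RS tries to match a
    // resource method, so "/api/service/add" and "/api/service/multiply" are
    // covered without defining an OPTIONS endpoint on each resource.
    @Provider
    @PreMatching
    public class CorsPreflightFilter implements ContainerRequestFilter {

        @Override
        public void filter(ContainerRequestContext requestContext) throws IOException {
            if ("OPTIONS".equalsIgnoreCase(requestContext.getMethod())) {
                requestContext.abortWith(Response.ok()
                        .header("Access-Control-Allow-Origin", "*")
                        .header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
                        .header("Access-Control-Allow-Headers", "Content-Type, Authorization")
                        .build());
            }
        }
    }

The actual (non-preflight) responses still need the Access-Control-Allow-Origin header as well, which can be added in a ContainerResponseFilter.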

Great article

Thank you for posting the article on separating EF Core from the domain. It's excellent!

Do you plan to update it to EF 3? I'm kind of new to EF and am still unsure whether there are any changes in v. 3 which may change, and hopefully simplify, the approach even more.

Adding null in AuditQueryResultUtil

In AuditQueryResultUtil.java :

If type.isInstance is false, then what? Do we add a null entity to the AuditQueryResult? Can you explain this decision?

This is in regard to the item[1], item[2] instance check.
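
For readers without the article at hand: Envers returns each audit row as an Object[] triple of entity, revision entity and revision type, and the utility guards the casts with isInstance checks roughly like the sketch below (names and indices are approximate, not the article's exact code); the question above is whether a failed guard should produce a null slot or skip the row entirely.

    import org.hibernate.envers.RevisionType;

    // Approximate shape of the discussed utility: each Envers result row is an
    // Object[] of [entity, revisionEntity, revisionType], and every cast is
    // guarded; the open question is what to do when a guard fails.
    public final class AuditTripleMapper {

        public static <T, R> AuditTriple<T, R> map(Object[] item, Class<T> entityType, Class<R> revisionEntityType) {
            T entity = entityType.isInstance(item[0]) ? entityType.cast(item[0]) : null;
            R revision = revisionEntityType.isInstance(item[1]) ? revisionEntityType.cast(item[1]) : null;
            RevisionType kind = item[2] instanceof RevisionType ? (RevisionType) item[2] : null;
            return new AuditTriple<>(entity, revision, kind);
        }

        public static final class AuditTriple<T, R> {
            public final T entity;
            public final R revision;
            public final RevisionType revisionType;

            public AuditTriple(T entity, R revision, RevisionType revisionType) {
                this.entity = entity;
                this.revision = revision;
                this.revisionType = revisionType;
            }
        }
    }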

Getting Error : The registration token is not a valid FCM registration token

I am using the right device token: "3215575407206244294-2953275565396661154-3", but I get this error response:
{
    "error": {
        "code": 400,
        "message": "The registration token is not a valid FCM registration token",
        "errors": [
            {
                "message": "The registration token is not a valid FCM registration token",
                "domain": "global",
                "reason": "badRequest"
            }
        ],
        "status": "INVALID_ARGUMENT"
    }
}

FCM with GoogleCredentials and SOCKs proxy

Hi,
Could we use fcm-java to sign in to the FCM server with the GoogleCredential JSON file that I got from the FCM server?
And how about using a SOCKS proxy in fcm-java?

@Test
public void testFcmClientWithProxySettings() {

    // Create Settings:
    IFcmClientSettings settings = new MockFcmClientSettings();

    // Create the HttpClient:
    HttpClient httpClient = new HttpClient(settings);

    // And configure the HttpClient:
    httpClient.configure((httpClientBuilder -> {

        // Define the Credentials to be used:
        BasicCredentialsProvider basicCredentialsProvider = new BasicCredentialsProvider();

        // Set the Credentials (any auth scope used):
        basicCredentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("your_username", "your_password"));

        // Now configure the HttpClientBuilder:
        httpClientBuilder
                // Set the Proxy Address:
                .setProxy(new HttpHost("your_hostname", 1234))
                // Set the Authentication Strategy:
                .setProxyAuthenticationStrategy(new ProxyAuthenticationStrategy())
                // Set the Credentials Provider we built above:
                .setDefaultCredentialsProvider(basicCredentialsProvider);
    }));

    // Finally build the FcmClient:
    IFcmClient client = new FcmClient(settings, httpClient);
}

I referred to this test and adapted it to my case with a SOCKS proxy and the Google credential JSON file.

Please give me your suggestion for this case.

Thanks,
Qui

Alternative Menu for Mobile Devices

It is incredibly hard to hit the right menu item when browsing on a mobile device.

There are two options to solve the problem:

  • Change Menu based on a @media query
  • Create a new menu for both desktop and mobile pages

Reconsider the usage of table elements

Tables are a problem when rendering the page for mobile devices, because they do not scale well and a small device width can't contain the information. If possible, large tables should be thrown out and replaced with a simpler approach to showing the data.

Can't write CLR type System.DateTime to database type timestamp with time zone

I am using NodaTime in my application, and while using copyHelper.SaveAll(connection, entitys); I am getting "Can't write CLR type System.DateTime to database type timestamp with time zone" as an error.
What I have done in my application:

  1. Configure the DbContext to use NodaTime:

    services.AddDbContextPool((options) =>
    {
        options.UseNpgsql(connectionStrings, optionsBuilder =>
        {
            optionsBuilder.UseNodaTime();
        });
    });

  2. Create the copy helper and save:

    var copyHelper = new PostgreSQLCopyHelper("xx", "yyyyy")
        .Map("columnname", x => x.columnname, NpgsqlDbType.TimestampTZ);
    copyHelper.SaveAll(connection, entitys);

Hence, I am getting the error "Can't write CLR type System.DateTime to database type timestamp with time zone".
Is NodaTime supported by this package?

Update PgBulkInsert Posts

I have updated the PgBulkInsert API, which includes breaking changes. All posts for PgBulkInsert have to be updated to reflect the latest changes or refer to the specific 1.4 version.

Question on dynamic load of the data source

Hello Phillip,

First of all, thanks for such a descriptive article on multi-tenant support.

I had a few queries with respect to the article and would like to know your thoughts on the same:

During a dynamic load, I suspect there might be a condition where, while updating the configuration, we end up in a case where a specific transaction is impacted because a data source connection is closed.

I just tried running some JUnit tests, which actually led me to this behaviour, so I suspect a lock while doing an update would be required. Please correct me if you think this is not right.
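
Regarding the locking concern: one option is to guard the tenant map with a ReentrantReadWriteLock, so per-request lookups stay cheap while replacing or closing a DataSource holds the write lock. A minimal sketch, with all class names hypothetical (not from the article):

    import javax.sql.DataSource;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Hypothetical registry: reads (per-request lookups) take the read lock,
    // while replacing a DataSource takes the write lock, so an in-flight
    // lookup never sees a half-updated map.
    public class TenantDataSourceRegistry {

        private final Map<String, DataSource> dataSources = new HashMap<>();
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        public DataSource lookup(String tenant) {
            lock.readLock().lock();
            try {
                return dataSources.get(tenant);
            } finally {
                lock.readLock().unlock();
            }
        }

        public void replace(String tenant, DataSource newDataSource) {
            lock.writeLock().lock();
            try {
                dataSources.put(tenant, newDataSource);
                // Closing the old DataSource (if it is closeable) would happen
                // here, once no reader can obtain it anymore.
            } finally {
                lock.writeLock().unlock();
            }
        }
    }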

Also, I wanted to understand the role of AsyncConfig.java. I might be missing some very basic trick here, but when I tried running the original code I found that if an automated JUnit test is run, this class causes an error, as the tenant name is never set and hence no data source can be found for null. Hence the application startup fails.

Again thank you for your time and effort in this article.
Thanks
Vicky
