

speech-dispatcher

Common interface to speech synthesis

Introduction

This is the Speech Dispatcher project (speech-dispatcher). It is a part of the Free(b)soft project, which aims to allow blind and visually impaired people to work with computers and the Internet using free software.

The Speech Dispatcher project provides a high-level, device-independent layer for access to speech synthesis through a simple, stable and well-documented interface.
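
For example, a client can request speech with a few calls to the Python API shipped in src/api/python/ (the speechd module, which also appears in several issues below). A minimal sketch, assuming a running speech-dispatcher daemon and a configured espeak-ng module:

import speechd

# Connect to the running speech-dispatcher daemon over SSIP.
client = speechd.SSIPClient("demo")           # "demo" is an arbitrary client name
client.set_output_module("espeak-ng")         # assumption: this module is configured
client.speak("Hello from Speech Dispatcher")
client.close()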

Documentation

Complete documentation can be found in the doc directory: the Speech Dispatcher documentation (doc/speech-dispatcher.html), the spd-say documentation (doc/spd-say.html), and the SSIP protocol documentation (doc/ssip.html).

Read doc/README for more information.

This documentation is also available online: the speech dispatcher documentation, the spd-say documentation, and the SSIP protocol documentation.

The key features, the supported TTS engines, output subsystems, client interfaces, and client applications known to work with Speech Dispatcher are listed in the overview of speech-dispatcher, along with voice settings and where to look in case of a sound or speech issue.

Mailing-lists

There is a public mailing-list speechd-discuss for this project.

This list is for Speech Dispatcher developers as well as for users. If you want to contribute to the development, propose a new feature, get help or just stay informed about the latest news, don't hesitate to subscribe. Communication on this list is in English.

Development

Various versions of speech-dispatcher can be downloaded from the project archive.

Bug reports, issues, and patches can be submitted to the GitHub tracker.

The source code is freely available. It is managed using Git. You can use the GitHub web interface or clone the repository from:

https://github.com/brailcom/speechd.git

Modules for different speech synthesis backends can easily be developed in different ways. This makes it possible to integrate all kinds of speech synthesizers, be they C libraries, external commands, or even HTTP services, whatever their licences, since the interface between the speechd server and the synthesizers is a mere pipe between processes with a very simple protocol. More details are available in the Output Modules documentation.
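
As a rough illustration of the external-command case, the generic module substitutes the text into a configured command template (see the GenericExecuteSynth snippets further down this page) and runs it. Here is a hedged Python sketch of that idea; the pico2wave and aplay commands are only examples, not part of Speech Dispatcher:

import subprocess
import tempfile

def speak_with_external_command(text, lang="en-US"):
    # Substitute the text into a command line and run it, roughly what a
    # "generic" external-command backend does.
    with tempfile.NamedTemporaryFile(suffix=".wav") as wav:
        subprocess.run(["pico2wave", "-w", wav.name, "-l", lang, text], check=True)
        subprocess.run(["aplay", wav.name], check=True)

if __name__ == "__main__":
    speak_with_external_command("Hello from an external command")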

Rust bindings are currently developed separately. You can use the GitLab web interface or clone the repository from:

https://gitlab.com/ndarilek/speech-dispatcher-rs.git

A Java library is currently developed separately. You can use the GitHub web interface or clone the repository from:

https://github.com/brailcom/speechd-java.git

To build and install speech-dispatcher and all of its components, read the file INSTALL.

People

Speech Dispatcher is developed in close cooperation between the Brailcom company and external developers; both are equally important parts of the development team. The development team also accepts and processes contributions from other developers, for which we are always very thankful! See more details about our development model in Cooperation. Below is a list of current inner development team members and people who have contributed to Speech Dispatcher in the past:

Development team:

  • Samuel Thibault
  • Jan Buchal
  • Tomas Cerha
  • Hynek Hanke
  • Milan Zamazal
  • Luke Yelavich
  • C.M. Brannon
  • William Hubbs
  • Andrei Kholodnyi

Contributors: Trevor Saunders, Lukas Loehrer, Gary Cramblitt, Olivier Bert, Jacob Schmude, Steve Holmes, Gilles Casse, Rui Batista, Marco Skambraks ...and many others.

Licensing

Speech Dispatcher uses several layers of software, to allow for flexible licensing.

The central speechd server is essentially GPLv2.

The C API client library is essentially LGPLv2.1-or-later, which allows it to be used in various applications with few licensing concerns. It is connected to the central server through a socket using the SSIP protocol, in such a way that GPL licensing propagation doesn't apply.
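
To make that boundary concrete, here is a minimal raw-SSIP sketch over the server socket, written in Python only for illustration. The socket path and the fire-and-forget style are simplifying assumptions (a real client reads each numeric reply before sending the next command); see doc/ssip.html for the actual protocol.

import os
import socket

SOCKET = os.path.join(os.environ.get("XDG_RUNTIME_DIR", "/run/user/%d" % os.getuid()),
                      "speech-dispatcher", "speechd.sock")  # assumed default location

def send(sock, line):
    sock.sendall(line.encode("utf-8") + b"\r\n")  # SSIP lines end with CRLF

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCKET)
    send(s, "SET SELF CLIENT_NAME user:ssip-demo:main")
    send(s, "SPEAK")
    send(s, "Hello over raw SSIP")
    send(s, ".")     # a line containing a single dot terminates the message text
    send(s, "QUIT")
    print(s.recv(4096).decode("utf-8", "replace"))  # numeric status replies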

The speech modules are connected to the central server through a pipe with a very simple protocol similar to SSIP, in such a way that GPL licensing propagation doesn't apply either.

To make writing speech modules simpler, a libspeechd_module library is provided under a BSD-2 license, which can thus be combined with essentially any other license.

Some more advanced module helpers are also provided under LGPLv2.1-or-later, but they are not mandatory.

In detail:

  • The speech-dispatcher server (src/server/ + src/common/) contains GPLv2-or-later and LGPLv2.1-or-later source code, but is linked against libdotconf, which is LGPLv2.1-only at the time of writing.

  • The speech-dispatcher modules (src/modules/ + src/common/ + src/audio/) contain GPLv2-or-later, LGPLv2.1-or-later, LGPLv2-or-later, and BSD-2 source code, but some parts are also linked against libdotconf, which is LGPLv2.1-only at the time of writing.

  • The spd-conf tool (src/api/python/speechd_config/), spd-say tool (src/clients/say), and spdsend tool (src/clients/spdsend/) are GPLv2-or-later.

  • The C API library (src/api/c/) is LGPLv2.1-or-later.

  • The Common Lisp API library (src/api/cl/) is LGPLv2.1-or-later.

  • The Guile API library (src/api/guile/) contains GPLv2-or-later and LGPLv2.1-or-later source code.

  • The Python API library (src/api/python/speechd/) is LGPLv2.1-or-later.

  • All tests in src/tests/ are GPLv2-or-later.

Copyright (C) 2001-2009 Brailcom, o.p.s.
Copyright (C) 2018-2020, 2022, 2024 Samuel Thibault [email protected]
Copyright (C) 2018 Didier Spaier [email protected]

This README file is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This README file is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details (file COPYING in the root directory).

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.


speechd's Issues

provide a proper espeak variant implementation

As reported by coffeeking on the freebsoft bug tracker:

"The espeak variant implementation in current speech-dispatcher is functional, but instead of providing a list of variants separate from the usual voices, it simply appends the variants onto the voice list, causing voice lists thousands of entries long and a lot of lag in Orca when trying to navigate them. If speech-dispatcher were updated to provide a proper speech-synth-agnostic variant API, leaving the work up to the module maintainers, this would correct the problem."

Audio: playback is splitty when using indexing

When using e.g. the baratinoo module, the playback is splitty when indexing is enabled (see the commented-out module_marks_add call in baratinoo.c). This seems to be because module_tts_output() is synchronous and the pieces of audio do not chain well from one module_tts_output() call to the next.

Add support for unicode font variants

Steps to reproduce: In a terminal, execute: "spd-say 𝓯𝓸𝓸"

Expected result: Something intelligible would be presented.

Actual result: "Letter 1d4ef, letter 1d4f8, letter 1d4f8"

Impact: These font variants cannot be presented in a sane fashion by any client of speech-dispatcher (including, but not limited to, Orca).

Rather than having every speech-dispatcher client do the translation, it would be great if speech-dispatcher could handle this on behalf of all clients. Thanks in advance!

Pronunciation of various words and i18n

As discussed on https://mail.gnome.org/archives/gnome-accessibility-devel/2008-April/msg00002.html , some words are mispronounced by speech synthesizers (e.g. "ubuntu") because they are exceptions. This could be solved at the Speech Dispatcher level with a pronunciation dictionary, applied before handing the text to the synthesizers. We could pre-populate such a dictionary depending on the language, both to get the pronunciation right (by using a language-specific spelling with appropriate syllables) and to get the appropriate pronunciation ("Linux" is not pronounced the same in various languages).
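
A sketch of what such a per-language exception dictionary could look like; the words and respellings below are made up purely for illustration, and the real feature would live in the server before text is handed to the output module:

import re

# Hypothetical per-language pronunciation exceptions (illustrative only).
PRONUNCIATION_DICT = {
    "en": {"ubuntu": "oo boon too"},
    "fr": {"ubuntu": "ou boun tou", "linux": "linukse"},
}

def apply_pronunciation_dict(text, lang):
    # Whole-word, case-insensitive replacement before synthesis.
    for word, respelling in PRONUNCIATION_DICT.get(lang, {}).items():
        text = re.sub(r"\b%s\b" % re.escape(word), respelling, text,
                      flags=re.IGNORECASE)
    return text

print(apply_pronunciation_dict("I run Ubuntu and Linux", "fr"))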

[Feature Request] support isolation of clients with the same user ID

According to the docs about history handling, speechd currently assumes that all clients with the same user ID run inside a single security domain. It allows any client to retrieve the text any other client has asked it to speak since the daemon started, as long as both clients have the same user ID.

Technologies such as Snap and Flatpak try to improve the security of Linux desktop applications by putting each application in a sandbox with limited, isolated access to the host system. There does not seem to be a secure way to give a sandboxed application access to the speech dispatcher service of the host system because all connected clients have access to the entire history. On a blind person's desktop this could include every keystroke the user has typed for however long the history log is, assuming key echo is enabled.

As a result, sandboxed applications that use Speech Dispatcher currently bundle it inside of the sandbox, so that each application has its own "private" instance of Speech Dispatcher running. This works more or less, but it has the downside that speech dispatcher cannot coordinate simultaneous messages from multiple apps. When multiple sandboxed apps use Speech Dispatcher at the same time, the text reading overlaps.

In order to solve this issue, I would really like to give sandboxed apps access to the Speech Dispatcher instance of the host. This, however, would require adding a security policy to Speech Dispatcher which is suitable for untrusted clients (don't leak existence of other clients, block history access or limit it to the client's own history, etc.)
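
A minimal sketch of the kind of policy this implies, assuming each history entry is tagged with the client connection that produced it (the names here are hypothetical, not Speech Dispatcher internals):

from dataclasses import dataclass

@dataclass
class HistoryEntry:
    client_id: int
    text: str

def history_visible_to(history, requesting_client_id, sandboxed=True):
    # Untrusted (sandboxed) clients only see their own messages;
    # trusted clients keep the current behaviour and see everything.
    if not sandboxed:
        return list(history)
    return [e for e in history if e.client_id == requesting_client_id]

history = [HistoryEntry(1, "secret keystrokes"), HistoryEntry(2, "hello")]
print(history_visible_to(history, requesting_client_id=2))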

What are your thoughts on this?

Note: You can take a look at the discussion about Text To Speech support in snapd for more context.

PulseAudio backend causes server to underrun in some configurations

I ran into this while investigating Firefox bug 1444567: on some systems, speech-dispatcher's PulseAudio backend causes the PA server to reconfigure itself for a latency so low that it causes audio in other applications to continuously underrun, effectively breaking normal audio playback.

speech-dispatcher's PA backend configures a 100 byte pa_buffer_attr.tlength here:
https://github.com/brailcom/speechd/blob/master/src/audio/pulse.c#L131

This seems strange - aside from being incredibly small (most systems can't reasonably service a sub-millisecond latency stream), PA expects a computed byte value, but speech-dispatcher's code doesn't account for sample rate, channel count, or sample format changes. For a typical 44.1kHz stereo f32 stream, a 100 byte tlength ends up requesting a 270 microsecond buffer.

Because this pa_buffer_attr is passed to pa_simple_new, there's no opportunity to pass specific pa_stream_flags_t values. The defaults for pa_simple_new include PA_STREAM_ADJUST_LATENCY, which I understand is what causes the server to reconfigure to service the requested latency rather than simply letting the stream's latency float up to the server's normal value.

It seems like speech-dispatcher's PA backend should request a more realistic latency value and scale it correctly for the requested sample rate, channel count, and sample format. What would break if tlength were set to -1, as the other pa_buffer_attr fields are? If the intention of low latency streams is to avoid losing buffered audio, the correct solution is to use pa_simple_flush (or, if that doesn't work, a timer to wait for the buffer to flush).
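
A sketch of the scaling suggested above, converting a requested latency into a tlength in bytes for the actual sample spec; the 20 ms figure is an arbitrary illustration, not a value taken from speech-dispatcher:

BYTES_PER_SAMPLE = {"s16le": 2, "float32le": 4}

def tlength_bytes(rate_hz, channels, sample_format, latency_ms=20):
    # Target buffer length in bytes for the requested latency.
    frame_bytes = channels * BYTES_PER_SAMPLE[sample_format]
    return int(rate_hz * frame_bytes * latency_ms / 1000)

# The hard-coded 100 bytes correspond to only a fraction of a millisecond
# at 44.1 kHz stereo float32:
print(100 / (44100 * 2 * 4) * 1000)          # ~0.28 ms
print(tlength_bytes(44100, 2, "float32le"))  # 7056 bytes for 20 ms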

It's not clear what hardware/software/configuration combination causes the PA server to enter this state. On my Fedora 27 system, I can see speech-dispatcher connecting to PA and requesting unusually low latency streams, but it doesn't cause the PA server to behave badly. On other systems, such as the user's in the Firefox bug and the other cases I've linked below, PA ends up in a bad state until speech-dispatcher is killed or PA is restarted (effectively disconnecting the s-d streams). So there's possibly a PA bug or misconfiguration at play too.

This has also turned up in other applications, e.g. Mumble and for an Ubuntu user.

CCing @ford-prefect as a PulseAudio expert in case he has any input on this.

Add pitch as an option for capitalization presentation

As reported by Joanmarie:
"set_cap_let_recogn() lets one choose amongst:

  • none
  • spell
  • icon

Orca has always provided this via pitch. I've implemented support in Orca for the above and would love to see pitch being something I could set via the same API rather than it being entirely separate.

Please and thank you. :)"

baratinoo: assertion failure/crash related to new marks

I'm not sure yet what leads to this, but there is a serious bug in the current Baratinoo module, which can in some cases lead to a Pulse (in my case) assertion failure apparently due to a race condition/bad mutex:

Assertion 'pa_atomic_load(&(s)->_ref) >= 1' failed at pulse/stream.c:336, function pa_stream_get_state(). Aborting.

or sometimes

Assertion 'pthread_mutex_destroy(&m->mutex) == 0' failed at pulsecore/mutex-posix.c:83, function pa_mutex_free(). Aborting.
Assertion 'pa_atomic_load(&(c)->_ref) >= 1' failed at pulse/context.c:1063, function pa_context_get_state(). Aborting.

or even a nice glibc backtrace:

*** Error in `/usr/lib/speech-dispatcher-modules/sd_baratinoo': corrupted double-linked list: 0x00007f1148000f20 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x70bfb)[0x7f116bf2abfb]
/lib/x86_64-linux-gnu/libc.so.6(+0x76fc6)[0x7f116bf30fc6]
/lib/x86_64-linux-gnu/libc.so.6(+0x7733d)[0x7f116bf3133d]
/lib/x86_64-linux-gnu/libc.so.6(+0x78dfa)[0x7f116bf32dfa]
/lib/x86_64-linux-gnu/libc.so.6(__libc_calloc+0xb6)[0x7f116bf359b6]
/lib64/ld-linux-x86-64.so.2(+0xb2d6)[0x7f116ed292d6]
/lib64/ld-linux-x86-64.so.2(+0x587d)[0x7f116ed2387d]
/lib64/ld-linux-x86-64.so.2(+0x880c)[0x7f116ed2680c]
/lib64/ld-linux-x86-64.so.2(+0x13bd4)[0x7f116ed31bd4]
/lib64/ld-linux-x86-64.so.2(+0xf704)[0x7f116ed2d704]
/lib64/ld-linux-x86-64.so.2(+0x136c9)[0x7f116ed316c9]
/lib/x86_64-linux-gnu/libc.so.6(+0x11f0dd)[0x7f116bfd90dd]
/lib64/ld-linux-x86-64.so.2(+0xf704)[0x7f116ed2d704]
/lib/x86_64-linux-gnu/libc.so.6(+0x11f16f)[0x7f116bfd916f]
/lib/x86_64-linux-gnu/libc.so.6(__libc_dlopen_mode+0x32)[0x7f116bfd91e2]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x1164b)[0x7f116c56e64b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11834)[0x7f116c56e834]
/lib/x86_64-linux-gnu/libpthread.so.0(__pthread_unwind+0x40)[0x7f116c56cd60]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8585)[0x7f116c565585]
/usr/lib/speech-dispatcher-modules/sd_baratinoo(+0x5b55)[0x557f47887b55]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494)[0x7f116c564494]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f116bfa2acf]

This seems to be caused by d6be18d (not 100% sure yet).

To reproduce:

$ spd-say -o baratinoo --ssml '<speak>La <mark name="3"/>fiche <mark name="9"/>Permissions <mark name="21"/>comporte <mark name="30"/>quatre <mark name="37"/>groupes <mark name="45"/>d’options <mark name="55"/>contenues <mark name="65"/>dans <mark name="70"/>des <mark name="74"/>listes <mark name="81"/>déroulantes. <mark name="94"/>Le</speak>'

You can also notice that before crashing, when reaching the end of the first sentence, that same sentence starts to be spoken again (repeated), and then the module terminates.

I'll investigate further, but would love hints if you got any.

Add an option to generate noise during speech-dispatcher execution

Some headphones or sound systems go idle when no sound is produced. This means that the beginnings of sentences are swallowed by the wake-up of the sound system.

It would be useful to add an option to speech-dispatcher to permanently generate noise, so as to keep such systems up and running. The volume of the noise should be configurable: loud enough to keep the system from going idle, but quiet enough to avoid giving the user a headache.

The implementation could be a mere thread, started during speech-dispatcher startup, that just generates the noise and pushes it to an audio output instance.
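
A rough sketch of that idea: a background thread keeps pushing very quiet noise to an audio output. The play_samples function below is a stand-in for whatever audio backend speech-dispatcher would actually use:

import random
import struct
import threading
import time

def play_samples(pcm_bytes):
    # Stand-in: a real implementation would hand these 16-bit mono samples
    # to the configured audio output.
    pass

def keep_alive_noise(stop, volume=0.01, rate=44100, chunk_ms=100):
    amplitude = int(32767 * volume)
    frames = rate * chunk_ms // 1000
    while not stop.is_set():
        samples = [random.randint(-amplitude, amplitude) for _ in range(frames)]
        play_samples(struct.pack("<%dh" % frames, *samples))
        time.sleep(chunk_ms / 1000)

stop = threading.Event()
threading.Thread(target=keep_alive_noise, args=(stop,), daemon=True).start()
time.sleep(1)   # the real thread would run for the server's whole lifetime
stop.set()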

</speak> pronounced sometimes

Sometimes SpeechDispatcher pronounces "</speak>".
Using Orca, I encounter this problem on some web pages.

Steps to reproduce with Orca:

  1. Open https://www.herboratheque.fr
  2. Go to the top of page
  3. Jump to the second graphic
  4. Go to the next line

After the currency, "</speak>" is pronounced, and also after the language.

In case it is useful: in Orca's preferences for Firefox, I checked the checkbox to have the same presentation as on screen ("mode disposition" in French).

Orca Preferences GUI doesn't give access to all available spd modules+config

didier[~]$ ls /usr/lib64/speech-dispatcher-modules/ -1
sd_cicero
sd_dummy
sd_espeak-ng
sd_festival
sd_flite
sd_generic
sd_pico
This list is displayed in the Speech Synthesizer combo box of the Voice tab of Orca's Preference GUI.
Sounds good, but...
I also have mbrola voices installed, and espeak-ng-mbrola-generic does not appear in the list.
I understand that strictly speaking the module in use is sd_generic, however, this prevents using the mbrola voices in Orca.
However if I uncomment this line in /etc/speech-dispatcher/speechd.conf:
#AddModule "espeak-ng-mbrola-generic" "sd_generic" "espeak-ng-mbrola-generic.conf"
then espeak-ng-mbrola-generic is added to the drop-down list in Orca Preferences GUI.

As it would probably be too complicated to check, for each "module" (actually, module setting) listed in speechd.conf, whether both the synthesizer and at least one voice file are available, maybe just insert in speechd.conf a comment like:

# To ensure that a module is available in Orca, you need to uncomment
# the corresponding line, removing the leading '#'.
# Also, the corresponding synthesizer should be installed, as well as
# at least one voice for it.

Maybe that is obvious for everybody but me?

Speaking character should respect capitalization settings applied to speaking strings

Type the following in a python3 console:

import speechd
client = speechd.SSIPClient("foo")
client.set_cap_let_recogn("none")
client.speak("H")
client.char("H")

Expected results: Both speak() and char() would result in "H" being spoken.

Actual results: char() results in "capital H" being spoken, even though the capitalization style is set to none.

If you repeat the above steps, but this time do client.set_cap_let_recogn("spell"), both times speech-dispatcher says "capital H". (As I would expect.)

Impact: Orca now relies upon char() for speaking a single character. As a result, if an Orca user sets their capitalization style to just use pitch, Orca now says "Capital H" (rather than just "H") in the specified pitch. It is my intent to keep using speech-dispatcher's char() to speak single characters because that just seems like the right thing to do. As a result, it would be helpful if speech-dispatcher would apply the capitalization style to char().

Thanks in advance!

Emojis support

Emojis are used more and more, and for the moment they are hard to read for blind users on Linux.
Implementing emoji support in Speech Dispatcher would make them available with every supported synthesizer.
A starting point could be to use http://cldr.unicode.org/ as the emoji list, as NV Access did for NVDA issue 6523 (nvaccess/nvda#6523).
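
As a rough illustration of the idea, using the Unicode character names from Python's unicodedata as a crude stand-in for the richer, localized CLDR descriptions:

import unicodedata

def describe_emoji(text):
    # Replace symbols and non-BMP characters with their Unicode names,
    # a crude stand-in for CLDR emoji descriptions.
    out = []
    for ch in text:
        if ord(ch) > 0xFFFF or unicodedata.category(ch) == "So":
            out.append(" " + unicodedata.name(ch, "unknown symbol").lower() + " ")
        else:
            out.append(ch)
    return "".join(out)

print(describe_emoji("Good job 👍"))  # prints roughly: Good job thumbs up sign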

Baratinoo: selecting a specific voice and then the default voice doesn't come back to default

Hello all,

Tested environment:

  • Debian 9 with speech-dispatcher master and Orca master
  • Debian 8 with speech-dispatcher 0.8 with Hypra's patches and Orca 3.24

Steps to reproduce:

  1. On Orca, select baratinoo as speech synthesizer
  2. Click on the apply button to reload the configuration with Baratinoo voice, by default in French it'll load Philippe
  3. Select Agnes and click on apply, it'll load the Agnes voice
  4. Switch back to "default voice" and click again on apply

Result:
It stays on Agnes

Expected result:
It should keep the default voice, here Philippe in French

Best regards,
Alex.

enhancement: implement a SoundIcon command

Please add a config option like SoundIconsDir in which the user can set a filesystem directory containing sound files, so that when a client sends the sound_icon command, the selected file is played using the selected output method instead of being passed directly to the output module.

Empty options yield a warning

Some configuration files have empty options, such as in espeak-generic.conf

GenericPunctNone ""

which yields

Missing argument to option 'GenericPunctNone'

This is actually an issue in dotconf itself: williamh/dotconf#5

How to change the speed/rate with pico-generic backend?

Hi,

Using Ubuntu 19.10 updated today.

Following the post below, I switched to pico-generic as a backend:
https://webcache.googleusercontent.com/search?q=cache:UWIk0ieinfgJ:https://listengine.tuxfamily.org/lists.tuxfamily.org/carrefourblinux/2011/09/msg00036.html+&cd=9&hl=en&ct=clnk&gl=fr

First, a very strange thing: once the config file was changed, speech-dispatcher did not restart.
There was an error in /var/log/speech-dispatcher/pico-generic.log about a connection to pulse that was impossible... I had to switch to libao; other options failed too.
Absolutely no documentation about what the problem is and how to solve it can be found in the doc... nor about why it happens with pico-generic and not with the default backend.

I tried to change the rate of playback with the corresponding DefaultRate variable in speechd.conf, but it has no effect. Since I know a way to play a wav file at varying speed, I tried to change the config to use it (./modules/pico-generic.conf):

GenericExecuteSynth \
"pico2wave -w $TMPDIR/pico.wav -l $VOICE \'$DATA\' && mplayer -af scaletempo -speed 1.5 $TMPDIR/pico.wav"

but that does not change a thing, even after dozens of restarts of the speech-dispatcher daemon...

Also, I would have loved to be able to change the $PLAY variable and use a variable instead of "1.5", but this does not seem to work... Where and how should I overwrite/configure such variables?

Why are all my changes ignored whereas pico is indeed used as a backend now?

Finally, the configuration of the rate should be fixed; there are plenty of tools on Linux (mplayer/sox) to change the rate of pico's generated file after the fact...

Get sound as a binary, not output it to speakers

Hi. I use Speech Dispatcher as TTS server middleware: I give it text and it creates sound. Its main parts are its interfaces: one interface to the client and another one to the modules. But as I understand it, there is currently no interface for a client to get the sound back. I mean: I give it text, but the sound created should go not to ALSA/PulseAudio but to one of the codecs installed on the host (e.g. FLAC, though a dummy "codec" returning the raw input should also be present), and the result should be returned to the client that requested the synthesis. This is useful if we want to use the sound in some way other than directly outputting it to the speakers, e.g. to send it to a browser (my use case), or to record it into a file.

dangling symlinks to es_419/emojis.dic for Latin American Spanish languages when using DESTDIR

Hi,

I've noticed that an installation to a DESTDIR has faulty symlinks pointing to the buildroot instead of the real folder for languages that use emojis from es_419.

The error is in locale/Makefile.am, line 220:

$(LN_S) "$(DESTDIR)$(localedatadir)/es_419/emojis.dic" "$(DESTDIR)$(localedatadir)/$$lang419/emojis.dic" || true ;\

should be:
$(LN_S) "$(localedatadir)/es_419/emojis.dic" "$(DESTDIR)$(localedatadir)/$$lang419/emojis.dic" || true ;\

or (even better IMHO):
$(LN_S) "../es_419/emojis.dic" "$(DESTDIR)$(localedatadir)/$$lang419/emojis.dic" || true ;
speechd-20191121-es419-1.txt

Volume/pitch/rate levels are not coherent between voices

For instance, default espeak volume and default flite volume are not comparable.

We would need to take the time to look at coherent volume levels, using e.g. spd-say with the different voices and different --volume values for each voice, to get a coherent set.

Then we would be able to compute scales to compensate for the differences between synthesizers.
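
A sketch of what such compensation could look like once calibration data exists; the gain factors below are made-up placeholders, not measured values:

# Hypothetical per-module gain factors measured at a fixed spd-say --volume;
# 1.0 is the reference loudness, below 1.0 attenuates, above 1.0 amplifies.
CALIBRATION = {"espeak-ng": 1.0, "flite": 1.4, "festival": 0.8}

def compensated_volume(requested, module):
    # Map a client-requested volume (-100..100) to a module-specific value.
    gain = CALIBRATION.get(module, 1.0)
    return max(-100, min(100, int(requested * gain)))

print(compensated_volume(70, "flite"))     # 98
print(compensated_volume(70, "festival"))  # 56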

Documentation files modified/removed on local repository by make and make clean

After running make, these documentation files, which are part of this repository, are modified and thus removed by make clean:

  • doc/speech-dispatcher-cs.html
  • doc/speech-dispatcher.html

After make clean, some others are removed:

  • doc/spd-say.html
  • doc/ssip.html

So we have to git checkout/restore these files before each pull.

Speech-dispatcher wouldn't use Festival properly

Hello,

I have Festival set, it works and it also receives and plays sound when server is running:

nc localhost 1314 <<< "(tts_text \"Hello big world, this is a test.\" nil)(quit)"

I then configured speech-dispatcher; it failed to properly configure itself via spd-conf, but I manually fixed the configuration file speechd.conf. To sum it up:

LogDir  "default"
DefaultRate  5
DefaultVolume 100    
DefaultLanguage "en"
DefaultPunctuationMode "all"
AudioOutputMethod "alsa"
AudioALSADevice "default"
AddModule "festival"     "sd_festival"  "festival.conf"
AddModule "dummy"         "sd_dummy"      ""
DefaultModule festival
LanguageDefaultModule "en"  "festival"
Include "clients/*.conf"

Next, ALSA test is working fine (producing sound). However, when I send a text to speech-dispatcher:

spd-say "Hello big world, this is a test."

...the Festival server goes crazy, as if it were unsuccessfully trying each and every voice it can think of:

SIOD: unknown voice cmu_us_ahw_cg
SIOD: unknown voice cmu_us_aup_cg
SIOD: unknown voice cmu_us_aup_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_axb_cg
SIOD: unknown voice cmu_us_axb_cg
SIOD: unknown voice cmu_us_bdl_cg
SIOD: unknown voice cmu_us_bdl_cg
SIOD ERROR: could not open file /usr/share/festival/dicts/oald/oaldlex.scm
closing a file left open: /usr/share/festival/voices/english/rab_diphone/festvox/rab_diphone.scm
SIOD: unknown voice rab_diphone
SIOD ERROR: could not open file /usr/share/festival/dicts/oald/oaldlex.scm
closing a file left open: /usr/share/festival/voices/english/rab_diphone/festvox/rab_diphone.scm
SIOD: unknown voice rab_diphone
SIOD: unknown voice cmu_us_kal_com_hts
SIOD: unknown voice cmu_us_kal_com_hts
SIOD: unknown voice cstr_us_ked_timit_hts
SIOD: unknown voice cstr_us_ked_timit_hts
SIOD: unknown voice cmu_us_slt_cg
SIOD: unknown voice cmu_us_slt_cg
SIOD: unknown voice cmu_us_rms_cg
SIOD: unknown voice cmu_us_rms_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_bdl_cg
SIOD: unknown voice cmu_us_bdl_cg
SIOD ERROR: ran out of storage 
closing a file left open: /usr/share/festival/voices/us/cmu_us_clb_cg//rf_models/trees_08/cmu_us_clb_mcep.tree
SIOD: unknown voice cmu_us_clb_cg
SIOD ERROR: ran out of storage 
closing a file left open: /usr/share/festival/voices/us/cmu_us_clb_cg//festival/trees/cmu_us_clb_mcep.tree
SIOD: unknown voice cmu_us_clb_cg
client(10) Mon Mar 16 22:10:26 2020 : accepted from localhost
SIOD: unknown voice cmu_us_ahw_cg
SIOD: unknown voice cmu_us_aup_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_axb_cg
SIOD: unknown voice cmu_us_bdl_cg
SIOD ERROR: could not open file /usr/share/festival/dicts/oald/oaldlex.scm
closing a file left open: /usr/share/festival/voices/english/rab_diphone/festvox/rab_diphone.scm
SIOD: unknown voice rab_diphone
SIOD: unknown voice cmu_us_kal_com_hts
SIOD: unknown voice cstr_us_ked_timit_hts
SIOD: unknown voice cmu_us_slt_cg
SIOD: unknown voice cmu_us_rms_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_bdl_cg
SIOD ERROR: ran out of storage 
closing a file left open: /usr/share/festival/voices/us/cmu_us_clb_cg//rf_models/trees_08/cmu_us_clb_mcep.tree
SIOD: unknown voice cmu_us_clb_cg
SIOD ERROR: ran out of storage 
closing a file left open: /usr/share/festival/voices/us/cmu_us_jmk_cg//festival/trees/cmu_us_jmk_mcep.tree
SIOD: unknown voice cmu_us_jmk_cg
SIOD: unknown voice cmu_us_ahw_cg
SIOD: unknown voice cmu_us_ahw_cg
SIOD: unknown voice cmu_us_aup_cg
SIOD: unknown voice cmu_us_aup_cg
SIOD: unknown voice cmu_us_awb_cg
SIOD: unknown voice cmu_us_awb_cg

So, festival is working, connection to ALSA is working, speech-dispatcher is sending something to the festival, but it's somehow broken, possibly wrong voice settings.

There is also a configuration file for the festival module in the /etc/speech-dispatcher/modules/ folder, festival.conf, but it's virtually empty (with a lot of commented text) and does not mention anything about the voices set by speech-dispatcher when calling Festival. It's the place where I would assume one can set that, especially given this comment in speechd.conf:

The DefaultVoiceType controls which voice type should be used by default. Voice types are symbolic names which map to particular voices provided by the synthesizer according to the output module configuration. Please see the synthesizer-specific configuration in etc/speech-dispatcher/modules/ to see which voices are assigned to different symbolic names. The following symbolic names are currently supported: MALE1, MALE2, MALE3, FEMALE1, FEMALE2, FEMALE3, CHILD_MALE, CHILD_FEMALE

# DefaultVoiceType "MALE1"

I also tried to increase heap size up to 50M (as per some posts in other discussions), but it doesn't help:

festival --server --heap 50000000

Any suggestions appreciated.

Versions info:

  • Speech-dispatcher 0.8.8-lp151.3.6.1 (from OpenSUSE Leap 15.1 repositories)
  • Festival 2.5.0-lp151.1.3 (from OpenSUSE Leap 15.1 repositories)
  • festival-freebsoft-utils 0.10-7.el7 (noarch from RL)
  • festvox--arctic-hts 2.5.0-3.fc30 (noarch from FC)
  • OS OpenSUSE Linux Leap 15.1

please implement HISTORY GET MESSAGE id command

I want to create something like NVDA Remote for Linux at the speechd layer, so I need a working HISTORY GET MESSAGE id command to convert the id obtained from an event into the message text to send to the client.
I think it should not be so hard to implement.
I am planning to get the message id from the begin event.

Default voice of module isn't based on locale

Environment:

  • Debian 9.5 "Stretch"
  • Speech Dispatcher RC1 or 0.8

Steps to reproduce:

  1. Configure your computer to have an FR locale for example
  2. Test this command:
    spd-say -o espeak-ng "je suis connecté"

Result:
It is pronounced with what seems to be the default Espeak NG voice (maybe a US voice)

Expected result:
It should be pronounced with the French voice

Another side effect: when you're using Mbrola and you've only installed the French voices, it will try to speak with US voices that are not installed, so it will stay silent, which is annoying for blind users relying only on speech synthesis.

Best regards,
Alex.

Requested Pulse latency is too low

Fedora 32, speech-dispatcher-0.9.1-6.fc32.x86_64, firefox-78.0.1-1.fc32.x86_64

Opening Discord in Firefox spawns speech-dispatcher, which connects to Pulse for playback. However, the latency it requests (500 us) is too low.


I have Pulse running with fixed_latency_range activated (load-module module-udev-detect fixed_latency_range=yes in /etc/pulse/default.pa). I need it because of a rhythm game I play: I want the latency to stay constant and never increase. In practice this isn't a problem for any program (because they request a large enough latency), however 500 us requested by speech-dispatcher immediately breaks the audio in most programs.

Could speech-dispatcher request a higher latency? In practice 1000 us seems to already be enough for my PC and I usually play with ~1315 us latency, which doesn't seem to cause any issues with other programs.

Voxin not listed as synthesizer since commit 7eaca7c

Hello,

context: voxin-3.1rc2 installed on Slint64-14.2.1.2
Up to and including commit 667e0c4, spd-say -O lists voxin among the available synthesizers, but not after commit 7eaca7c (23 March 2020). I can't figure out how this change ("Replace AudioPulseServer option with AudioPulseDevice") could trigger this behavior. This is using the same build script; only the source archives differ.

Any clue appreciated.

Didier

src/api/python/speechd_config/config.py.in needs an update

In src/api/python/speechd_config/config.py.in we read:

        # Now determine the most important config option
        self.default_output_module = question_with_suggested_answers(
            "Default output module",
            "espeak",
            ["espeak", "flite", "festival", "cicero", "ibmtts"])

We are missing espeak-ng in the list, and I think it should be the default now.
More generally, I assume that this file needs to be put in sync with the modules listed in config/speechd.conf, quoted below:

#AddModule "espeak"       "sd_espeak"   "espeak.conf"
#AddModule "espeak-ng"    "sd_espeak-ng" "espeak-ng.conf"
#AddModule "festival"     "sd_festival"  "festival.conf"
#AddModule "flite"        "sd_flite"     "flite.conf"
#AddModule "ivona"	 "sd_ivona"    "ivona.conf"
#AddModule "pico"        "sd_pico"     "pico.conf"
#AddModule "espeak-generic" "sd_generic" "espeak-generic.conf"
#AddModule "espeak-ng-mbrola-generic" "sd_generic" "espeak-ng-mbrola-generic.conf"
#AddModule "espeak-mbrola-generic" "sd_generic" "espeak-mbrola-generic.conf"
#AddModule "swift-generic" "sd_generic" "swift-generic.conf"
#AddModule "epos-generic" "sd_generic"   "epos-generic.conf"
#AddModule "dtk-generic"  "sd_generic"   "dtk-generic.conf"
#AddModule "pico-generic"  "sd_generic"   "pico-generic.conf"
#AddModule "ibmtts"       "sd_ibmtts"    "ibmtts.conf"
#AddModule "cicero"        "sd_cicero"     "cicero.conf"
#AddModule "kali"        "sd_kali"       "kali.conf"
#AddModule "mary-generic" "sd_generic"   "mary-generic.conf"
#AddModule "baratinoo" "sd_baratinoo"   "baratinoo.conf"

EDIT: Another way could be to list only the modules actually installed, but it may not be easy to find out where they have been installed.
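
A possible update, assuming the same question_with_suggested_answers helper, with espeak-ng as the suggested default and the answers drawn from the modules quoted above:

        # Now determine the most important config option
        self.default_output_module = question_with_suggested_answers(
            "Default output module",
            "espeak-ng",
            ["espeak-ng", "espeak", "flite", "festival", "cicero", "ibmtts",
             "ivona", "pico", "espeak-ng-mbrola-generic", "baratinoo", "kali"])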

Add a new parameter to set the Speech-Dispatcher volume to the system one

Hello,

Today I helped a user whose system ended up in a no-speech state due to a PulseAudio adjustment made by VLC to increase the volume of an audio book.

To avoid leaving the user with a muted Speech Dispatcher, I propose creating a new configuration parameter called "UseSystemVolume" that would automatically set the Speech Dispatcher volume in PulseAudio to the system level at startup.

Best regards.

No way to configure SSML parsing to on for speech-dispatcher-espeak-ng

Related #1

Attempting to set SSML parsing to on by default for the Web Speech API in browsers, I 1) installed python3-speechd, removed espeak-ng and espeak, cloned espeak-ng from GitHub and added | espeakSSML to L344 of espeak-ng.c, then built, installed and verified the installation

$ find /usr/* | grep libespeak-ng
/usr/lib/libespeak-ng.a
/usr/lib/libespeak-ng.la
/usr/lib/libespeak-ng.so
/usr/lib/libespeak-ng.so.1
/usr/lib/libespeak-ng.so.1.1.51
/usr/share/doc/libespeak-ng1
/usr/share/doc/libespeak-ng1/changelog.Debian.gz
/usr/share/doc/libespeak-ng1/copyright

and the output of the change to the source file

$ espeak-ng "<speak>test</speak>" parses the SSML by default, without passing the -m flag; $ spd-say "<speak>test</speak>" does not parse SSML without the -x flag.

Looking further into the source code, the speech-dispatcher-espeak-ng package installs the file /usr/lib/speech-dispatcher-modules/sd_espeak-ng, which appears to be a self-contained version of espeak-ng unrelated to the version installed from the repository.

Kindly provide the steps necessary to either a) set the speech synthesis engine to the user selected local speech synthesis engine, instead of the file shipped with speech-dispatcher-espeak-ng; or b) set SSML parsing to on during $ spd-conf prompts and in ~/.config/speech-dispatcher/modules/espeak-ng.conf directly.

Required libraries not installed

sd_kali requires: libKali.so, libKGlobal.so, libKTrans.so, libKParle.so, libKAnalyse.so
sd_baratinoo requires: libbaratinoo.so
sd_ibmtts requires: libibmeci.so
but these libraries are not installed.

How to install using git repository?

Getting errors when following the instructions in INSTALL:

./configure: line 12297: syntax error near unexpected token `0.40.0'
./configure: line 12297: `IT_PROG_INTLTOOL(0.40.0)'
~/speechd-master$ make all
make: *** No rule to make target 'all'.  Stop.

Should support multilingualization between modules

SSML defines language and voice tags to switch voices dynamically within a text, a paragraph, and even a sentence. Modules can support on-the-fly language/voice changes by themselves, but if a module does not support the requested language, we need to switch to another module. In that case speech-dispatcher needs to split the SSML content to feed it to different modules.
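
A rough sketch of such splitting, grouping chunks by their xml:lang attribute before routing them to modules; it ignores nesting and most of the real SSML complexity:

import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def split_by_language(ssml, default_lang="en"):
    # Yield (lang, text) chunks from a flat <speak> document; each chunk could
    # then be routed to whichever module supports that language.
    root = ET.fromstring(ssml)
    if root.text and root.text.strip():
        yield (root.get(XML_LANG, default_lang), root.text.strip())
    for child in root:
        lang = child.get(XML_LANG) or root.get(XML_LANG, default_lang)
        text = "".join(child.itertext()).strip()
        if text:
            yield (lang, text)
        if child.tail and child.tail.strip():
            yield (root.get(XML_LANG, default_lang), child.tail.strip())

ssml = '<speak>Hello, <s xml:lang="fr">bonjour tout le monde</s> and goodbye.</speak>'
print(list(split_by_language(ssml)))
# [('en', 'Hello,'), ('fr', 'bonjour tout le monde'), ('en', 'and goodbye.')]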

[Feature Request] Consider Normalizing Text Data Prior to Screen-Reading

Problem

It is increasingly common for users on social media and websites to stylize text with Unicode homoglyphs. Such text is completely illegible to users of a screen reader and makes actually reading social media content difficult.

For example, with the desired message:

  • "Sample text."

We can have users report all of the following outputs, and more:

  • "๏ผณ๏ฝ๏ฝ๏ฝ๏ฝŒ๏ฝ…ใ€€๏ฝ”๏ฝ…๏ฝ˜๏ฝ”๏ผŽ"
  • "โ“ˆโ“โ“œโ“Ÿโ“›โ“”ใ€€โ“ฃโ“”โ“งโ“ฃ๏ผŽ"
  • "๐“ข๐“ช๐“ถ๐“น๐“ต๐“ฎ ๐“ฝ๐“ฎ๐”๐“ฝ."
  • "๐”–๐”ž๐”ช๐”ญ๐”ฉ๐”ข ๐”ฑ๐”ข๐”ต๐”ฑ."
  • "๐‘บ๐’‚๐’Ž๐’‘๐’๐’† ๐’•๐’†๐’™๐’•."
  • "๐•Š๐•’๐•ž๐•ก๐•๐•– ๐•ฅ๐•–๐•ฉ๐•ฅ."

Potential Solution

A relatively naive, but fairly robust solution is to do Unicode NFKD or NFKC normalization on all inputs. NFKD stands for Normalization Form Compatibility Decomposition, without any recomposition. NFKC stands for Normalization Form Compatibility Decomposition, Followed by Canonical Composition.

In Python, this is fairly trivial:

import unicodedata

def normalize(text):
    return unicodedata.normalize('NFKD', text)

For all of the above examples I showed you, it works fairly well:

>>> normalize("๐•Š๐•’๐•ž๐•ก๐•๐•– ๐•ฅ๐•–๐•ฉ๐•ฅ.")
'Sample text.'
>>> normalize("๏ผณ๏ฝ๏ฝ๏ฝ๏ฝŒ๏ฝ…ใ€€๏ฝ”๏ฝ…๏ฝ˜๏ฝ”๏ผŽ")
'Sample text.'
>>> normalize("โ“ˆโ“โ“œโ“Ÿโ“›โ“”ใ€€โ“ฃโ“”โ“งโ“ฃ๏ผŽ")
'Sample text.'
>>> normalize("๐“ข๐“ช๐“ถ๐“น๐“ต๐“ฎ ๐“ฝ๐“ฎ๐”๐“ฝ.")
'Sample text.'
>>> normalize("๐”–๐”ž๐”ช๐”ญ๐”ฉ๐”ข ๐”ฑ๐”ข๐”ต๐”ฑ.")
'Sample text.'
>>> normalize("๐‘บ๐’‚๐’Ž๐’‘๐’๐’† ๐’•๐’†๐’™๐’•.")
'Sample text.'
>>> normalize("๐•Š๐•’๐•ž๐•ก๐•๐•– ๐•ฅ๐•–๐•ฉ๐•ฅ.")
'Sample text.'

Downsides

There are various situations where this may mildly change the semantics of the input text:

>>> '\ufb01'
'ﬁ'
>>> normalize('\ufb01')
'fi'

However, in other cases it should be fine:

>>> 'dæmon'
'dæmon'
>>> normalize('dæmon')
'dæmon'

Other issues could occur with the use of confusable homoglyphs:

>>> normalize('𝕀𝕒𝕥𝕖')
'Iate'

Obviously, the desired result would be "late"; however, without a hopelessly complex semantic analysis, this is the best we can do with minimal boilerplate.

Current Attempts

There have been previous attempts at fixing this, for example through the introduction of font-variants.dic, and it has been addressed in #217. However, not only would Unicode NFKC/NFKD normalization solve all these issues, it would also address many homoglyph variants that currently aren't covered by the above.

Normalization Options

Please note that NFC normalization is recommended by the W3C (for the web), but screen readers should go one step further: NFC and NFD normalization do not handle the aforementioned styles. With NFKC/NFKD we should have minimal issues with correctness or semantics.

Other Issues

Some Unicode blocks may be commonly used for "stylizing" text but also have significant meaning by mathematicians outside this context. It might be a good idea to ignore characters from certain Unicode blocks, or at the very least provide an option to, since this may hinder the reading of mathematical expressions.
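
A sketch of that option on top of the normalize() function above, exempting a configurable set of codepoint ranges (here only the Mathematical Alphanumeric Symbols block, U+1D400 to U+1D7FF) from compatibility normalization:

import unicodedata

# Blocks to leave untouched (e.g. for mathematical content); in a real
# implementation this would be a user-configurable option.
EXEMPT_RANGES = [(0x1D400, 0x1D7FF)]  # Mathematical Alphanumeric Symbols

def selective_normalize(text, form="NFKD"):
    out = []
    for ch in text:
        if any(lo <= ord(ch) <= hi for lo, hi in EXEMPT_RANGES):
            out.append(ch)  # keep the character as-is
        else:
            out.append(unicodedata.normalize(form, ch))
    return "".join(out)

print(selective_normalize("Ⓢⓐⓜⓟⓛⓔ ⓣⓔⓧⓣ"))  # circled letters still get normalized
print(selective_normalize("𝕊𝕒𝕞𝕡𝕝𝕖 𝕥𝕖𝕩𝕥"))  # double-struck letters are preserved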

speech-dispatcher crashes on startup with python 3.9

I noticed that orca is broken in Fedora 33; it starts up "successfully," but never says anything. It's caused by this python crash:

12:48:10.345242 - SPEECH: Initializing

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/orca/speechdispatcherfactory.py", line 161, in __init__
    self._init()
  File "/usr/lib/python3.9/site-packages/orca/speechdispatcherfactory.py", line 172, in _init
    self._client = client = speechd.SSIPClient('Orca', component=self._id)
  File "/usr/lib64/python3.9/site-packages/speechd/client.py", line 578, in __init__
    self._initialize_connection(user, name, component)
  File "/usr/lib64/python3.9/site-packages/speechd/client.py", line 601, in _initialize_connection
    self._conn.send_command('SET', Scope.SELF, 'CLIENT_NAME', full_name)
  File "/usr/lib64/python3.9/site-packages/speechd/client.py", line 326, in send_command
    code, msg, data = self._recv_response()
  File "/usr/lib64/python3.9/site-packages/speechd/client.py", line 292, in _recv_response
    if not self._communication_thread.isAlive():
AttributeError: 'Thread' object has no attribute 'isAlive'

12:48:10.395146 - ERROR: Speech Dispatcher service failed to connect
12:48:10.395666 - SPEECH: Not available
12:48:10.395689 - SPEECH: Initialized

Python 3.9 release notes say:

The isAlive() method of threading.Thread has been removed. It was deprecated since Python 3.8. Use is_alive() instead. (Contributed by Dong-hee Na in bpo-37804.)

Some snarky comment about backwards-compat seems appropriate. Anyway, it looks like Thread.is_alive() has been available since python 2.6, so there should be no risk in changing that.
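
The fix on the speechd side would presumably be the one-word rename on the line shown in the traceback (speechd/client.py, in _recv_response()); a tiny sketch confirming the portable spelling:

import threading

# Thread.isAlive() was removed in Python 3.9; is_alive() has existed since
# Python 2.6, so the failing check just needs its spelling updated to:
#     if not self._communication_thread.is_alive():
t = threading.Thread(target=lambda: None)
t.start()
t.join()
print(t.is_alive())  # False; works on both old and new Python versions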

With Speech Dispatcher 0.9RC2, it is not possible to list voices in certain circumstances

Hello all,

Environment:

  • Debian Stretch or Debian Buster
  • Orca 3.30.1 from backports or Orca from Debian experimental
  • Speech Dispatcher 0.9RC2 or Speech Dispatcher from Debian experimental

Steps to reproduce:

  1. Launch Orca with this command
    killall speech-dispatcher; speech-dispatcher; orca --replace
  2. Go to voice tab
  3. Check the available list

Result:
The list of voices is empty

Expected result:
List of voices should be available

This issue seems random on the computer of one of my colleagues; on my computer, it's always present.

Best regards,
Alex.

Disturbs audio during startup

It has been reported (though apparently never directly to speechd...) that when speech-dispatcher starts, the system audio gets disturbed. That can be heard by starting speech-dispatcher while some sound is playing.

Cannot set SpeechSynthesisVoice to female voice variant when espeak is the default speech synthesis module

Steps to reproduce the problem:

  1. Execute window.speechSynthesis.getVoices()
  2. Filter SpeechSynthesisVoice objects and select a SpeechSynthesisVoice object where "female" is included in SpeechSynthesisVoice "name" attribute

What is the expected behavior?
The selected SpeechSynthesisVoice object "name" attribute should correspond to the name of the voice variant expected by espeak

What went wrong?
The female voice variant is not selected, the male voice variant is output as audio

espeak expects the voice variant name to be e.g. "female1" or "english+f1", not "english+female1 espeak" as at Chromium (https://bugs.chromium.org/p/chromium/issues/detail?id=811160) or "english+female1" as at Firefox (https://bugzilla.mozilla.org/show_bug.cgi?id=1437422).

<!DOCTYPE html>
<html>

<head>
<title>Cannot set SpeechSynthesisVoice to female voice variant when espeak is the default speech synthesis module</title>
<script>
window.speechSynthesis.cancel();
var text = "hello universe";

var handleVoices = () => {
window.speechSynthesis.onvoiceschanged = null;
voices = voices || window.speechSynthesis.getVoices();
// filter female voice variants
voices = voices.filter(({
name: voiceName
}) => /^en-|english/.test(voiceName) && /female/.test(voiceName));
// select "english" "female1" voice variant
var voice = voices.find(({name: voiceName}) => /^english\+female1/.test(voiceName));
console.log(voices, voice);
const utterance = new SpeechSynthesisUtterance();
utterance.text = text;
utterance.lang = "en-US";
utterance.voice = voice;
// `espeak` expects voice variant name to be e.g., `"female1"` or `"english+f1"`
// not `"english+female1 espeak"` at Chromium
// or `"english+female1"` at Firefox
console.log(utterance.voice.name);
window.speechSynthesis.speak(utterance);

}
window.speechSynthesis.onvoiceschanged = handleVoices;
var voices = window.speechSynthesis.getVoices();
if (voices.length) handleVoices();

</script>
</head>
<body>
</body>
</html>

Consider Weblate as translation platform

In my experience, free software projects don't get many translators to contribute when translators are required to look into the source code of the project to generate or update the PO file. (I'm an exception, though.)

For this reason, I would like to suggest considering a translation platform. As it is an environment where volunteers normally translate software, it would be easier to get more attention to this project's localization.

I feel Weblate is a great option, as it is gratis for free software, requiring only that an account be created on the Hosted Weblate instance and a request be submitted for the free software to be added. See this page for this information and then this form (you must be logged into Hosted Weblate).

How to set SSML parsing to on at user configuration file?

Presently the Chromium browser does not provide a means to set the x option to parse SSML when calling window.speechSynthesis.speak(); see "Implement SSML parsing at SpeechSynthesisUtterance when --enable-speech-dispatcher flag is set".

I have created a user configuration file using spd-conf -u, though I have not located documentation or code relevant to setting SSML parsing on by default for spd-say, or to the espeak -m option when espeak is set as the default module.

How to set SSML parsing to on for all unix socket connections from Chromium browser?

Add support for Mimic

Mimic is a compact speech synthesis engine developed as part of the Mycroft project, based on Flite, and largely compatible with it from the command line interface.

An issue already exists at Mycroft to add speech-dispatcher support for Mimic, but it doesn't seem to have been worked on. I thought it would at least be beneficial to raise it here for mutual awareness.

Support for gender sensitive language

There have now been two bugs filed against Orca for this feature:

As I've stated in the first of the above:

I'm not sure how I can solve this in Orca. Orca doesn't have a dictionary in which it stores all possible words for all languages which might need to be pronounced in a different fashion. Instead, it gets strings (e.g. from web page content, LibreOffice documents, email messages, etc.). Then it sends those strings to speech-dispatcher, which in turn sends them to language-specific speech synthesizers (e.g. espeak, voxin, etc.). Thus I think the place to fix this would be either in speech-dispatcher or in the synthesizers.

I stated something similar in the second of the above:

I think is important to keep in mind is that a screen reader's job is to present what is on the screen. If "nosotrxs" is on the screen, I do not think the screen reader (i.e. Orca) should modify that text turning it into, say, "nosotros" or "nosotros o nosotras" or anything else.

If an optional layer is called for, perhaps it could live in Speech Dispatcher so other tools which include text-to-speech functionality (such as the NVDA screen reader) could also benefit from it?

Thoughts?

Support socket activation

It would be really useful to have Socket activation support so that speechd gets launched automatically when a client connects to the socket.

See #335 (comment) for more context.
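
For reference, the usual systemd convention is that activated sockets are passed as inherited file descriptors starting at fd 3, with LISTEN_FDS and LISTEN_PID set in the environment. A minimal sketch of picking such a socket up, in Python purely to illustrate the convention (speechd itself is C):

import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor passed by systemd

def socket_from_activation():
    # Return the listening socket handed over by systemd socket activation,
    # or None if the process was started normally.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return None
    if int(os.environ.get("LISTEN_FDS", "0")) < 1:
        return None
    # speech-dispatcher listens on a Unix stream socket by default.
    return socket.socket(fileno=SD_LISTEN_FDS_START,
                         family=socket.AF_UNIX, type=socket.SOCK_STREAM)

listener = socket_from_activation()
if listener is None:
    print("not socket-activated; create and bind the socket as usual")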
