
asyncbigtable's Introduction

AsyncBigtable


This is an HBase library intended to work as a drop-in replacement for the fantastic AsyncHBase library and to integrate OpenTSDB with Google Bigtable. It uses the Apache HBase 1.0 API and links against the Google Bigtable libraries.

This library started out as a fork of the asynchbase 1.5.0 library, so you may come across code that looks strange at first sight. We are working on cleaning up the code base and removing irrelevant dependencies.

Basic Installation

Unlike the original asynchbase library, asyncbigtable uses Maven as its build tool.

To build the jar files, run:

mvn clean package

Maven will produce the following two jar files under the target/ directory:

  1. asyncbigtable-<version>.jar, the compiled jar file
  2. asyncbigtable-<version>-jar-with-dependencies.jar, an assembly jar bundling the dependencies needed to run asyncbigtable with OpenTSDB. Note that this jar does not include every dependency, only the ones required by OpenTSDB.

Javadoc

Since AsyncBigtable tries to be 100% compatible with AsyncHBase, please read the AsyncHBase javadoc.
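
To give a feel for the API, here is a minimal, non-authoritative sketch of the AsyncHBase-style call pattern that asyncbigtable aims to support. The class and method names follow the AsyncHBase javadoc; the table, row and column names are placeholders, and construction of the HBaseClient (which depends on your Bigtable project and instance configuration) is omitted.

  import java.util.ArrayList;
  import com.stumbleupon.async.Deferred;
  import org.hbase.async.GetRequest;
  import org.hbase.async.HBaseClient;
  import org.hbase.async.KeyValue;
  import org.hbase.async.PutRequest;

  // 'client' is assumed to be an HBaseClient configured for your Bigtable
  // project/instance (see the Integration Tests section below for the relevant
  // google.bigtable.* properties).
  void example(final HBaseClient client) throws Exception {
    final byte[] table = "tsdb".getBytes();
    final byte[] key = "row-1".getBytes();
    final byte[] family = "t".getBytes();
    final byte[] qualifier = "q".getBytes();

    // Writes are buffered; flush() forces them out.
    final Deferred<Object> put =
        client.put(new PutRequest(table, key, family, qualifier, "value".getBytes()));
    client.flush();
    put.join();

    // Reads complete without an explicit flush.
    final ArrayList<KeyValue> row = client.get(new GetRequest(table, key)).join();
    for (final KeyValue kv : row) {
      System.out.println(new String(kv.qualifier()) + " = " + new String(kv.value()));
    }
  }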

Integration Tests

Integration tests can be run as follows:

mvn clean verify -Pbt-integration-test -Dgoogle.bigtable.project.id=<projectId> -Dgoogle.bigtable.instance.id=<instanceId> 

Changelog

This project uses Semantic Versioning.

0.4.3

  • Updated dependency to com.google.cloud.bigtable:bigtable-hbase-2.x-hadoop:1.23.0
  • Added FilterList.size() to fix compatibility with asynchbase.

0.4.2

  • Updated dependency to com.google.cloud.bigtable:bigtable-hbase-2.x-hadoop:1.9.0

0.4.1

  • Updated dependency to com.google.cloud.bigtable:bigtable-hbase-2.x-hadoop:1.8.0

0.4.0

  • Updated dependency to com.google.cloud.bigtable:bigtable-hbase-2.x:1.4.0
  • Updated Java version to 1.8
  • Implemented true async support using the HBase 2.0 async client API
  • Fixed concurrency issues
  • Added integration tests
  • Many more changes, since the last release was a long time ago

0.3.0

  • Updated dependency to com.google.cloud.bigtable:bigtable-hbase-1.2:0.9.6
  • Updated dependency to protobuf-java:3.0.2
  • Updated dependency to netty-all 4.1.0.Final

0.2.1

  • This is the first public release of the asyncbigtable library, which started out as a fork of the asynchbase 1.5.0 library.
  • Modified all HBase access operations to use the standard HBase 1.0 API calls
  • Added Google Bigtable 0.2.2 dependency
  • Changed project build system from Make to Maven
  • Added assembly Maven plugin to build uber jar for distribution with OpenTSDB

Disclaimer

Please note that this library is still under development. It was never meant to replace AsyncHBase when running against an HBase backend, nor to be a general-purpose HBase library.

asyncbigtable's People

Contributors

adityakishore, b4hand, bdd, csoulios, danburkert, dvdreddy, goll, james-at-dataxu, jesse5e, manolama, mbrukman, octo47, overthetop, phs, saintstack, sduskis, shrijeetrf, spollapally, tsuna, varundhussa, xunl


asyncbigtable's Issues

Build a new version of Bigtable library

Hi @csoulios, since you've updated the version of the Bigtable client library to address issue #16, could you please build a new version of the library and update the OpenTSDB library deps accordingly?

The latest version on Maven Central is currently 0.1.0 from Oct 2015, which is quite out-of-date.

Thanks!

Prisma vulnerabilities reported on asyncbigtable 0.4.3

Hi - we are using version 0.4.3 in our application with OpenTSDB. We ran a Prisma scan against it, and vulnerabilities were reported for asyncbigtable-0.4.3-jar-with-dependencies.jar. Can we attach the Prisma report here for your reference, or is there some other process/email/mailing list for reporting vulnerabilities?

BigTable API: CompletableFuture failed with exception

I was getting this error from time to time while doing some highly concurrent writes using the driver. I'm using a single HBaseClient instance from many threads that are all issuing AtomicIncrementRequests, and I'm flushing the client from a timer on the main thread only (a sketch of this access pattern follows the stack trace below). In the Bigtable dashboard I saw around 800 write req/s on a development-type instance.

  1. Is HBaseClient thread-safe?
  2. If 1) is yes: I see that the com.google.cloud.bigtable.hbase2_x.FutureUtils class doesn't provide an executor when calling the toCompletableFuture() method, and that might be the issue as well.

What do you think the problem is here, and how can it be resolved?

ERROR [org.hbase.async.HBaseClient] BigTable API: CompletableFuture failed with exception
java.util.concurrent.CompletionException: io.grpc.StatusRuntimeException: ABORTED: ReadModifyWrite failed: too many concurrent modifications.
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at com.google.cloud.bigtable.hbase2_x.FutureUtils$2.onFailure(FutureUtils.java:51)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1228)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:911)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:822)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:686)
at com.google.cloud.bigtable.grpc.async.AbstractRetryingOperation$GrpcFuture.setException(AbstractRetryingOperation.java:119)
at com.google.cloud.bigtable.grpc.async.AbstractRetryingOperation.setException(AbstractRetryingOperation.java:271)
at com.google.cloud.bigtable.grpc.async.AbstractRetryingOperation.onError(AbstractRetryingOperation.java:222)
at com.google.cloud.bigtable.grpc.async.AbstractRetryingOperation.onClose(AbstractRetryingOperation.java:187)
at com.google.cloud.bigtable.grpc.io.ChannelPool$InstrumentedChannel$2.onClose(ChannelPool.java:210)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at com.google.cloud.bigtable.grpc.io.RefreshingOAuth2CredentialsInterceptor$UnAuthResponseListener.onClose(RefreshingOAuth2CredentialsInterceptor.java:85)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:391)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:475)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:478)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:590)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: ABORTED: ReadModifyWrite failed: too many concurrent modifications.
at io.grpc.Status.asRuntimeException(Status.java:517)
... 19 more
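
For context, the snippet below is a minimal sketch of the access pattern described in this report: a single shared HBaseClient, several threads issuing AtomicIncrementRequests against a small set of rows, and a timer-driven flush. All names, counts and intervals are placeholders rather than the reporter's actual code.

  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;
  import org.hbase.async.AtomicIncrementRequest;
  import org.hbase.async.HBaseClient;

  void incrementConcurrently(final HBaseClient client) {
    final byte[] table = "tsdb".getBytes();
    final byte[] family = "t".getBytes();

    // Several worker threads share the single HBaseClient instance.
    for (int i = 0; i < 8; i++) {
      new Thread(() -> {
        for (int n = 0; n < 1000; n++) {
          // Contention on a small set of rows is what produces the ABORTED
          // "too many concurrent modifications" error shown above.
          final byte[] key = ("counter-" + (n % 100)).getBytes();
          client.atomicIncrement(
              new AtomicIncrementRequest(table, key, family, "count".getBytes()));
        }
      }, "writer-" + i).start();
    }

    // A periodic flush driven by a single timer thread, as described in the report.
    final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    timer.scheduleAtFixedRate(client::flush, 1, 1, TimeUnit.SECONDS);
  }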

Update README to clarify module release history

The README currently says:

Changelog

This project uses Semantic Versioning.

0.3.0

[...]

0.2.1

[...]

However, it doesn't appear that v0.2.1 was ever released or published. If we search on Maven Central, the only version of com.pythian.opentsdb that appears is v0.1.0; there's no mention of v0.2.1.

So, is the "0.2.1" heading here supposed to be changed to 0.1.0 instead? Or was there a release of 0.2.1 somewhere that's not obvious?

I believe 0.2.1 was at some point added to pom.xml as a version; it may just never have been built and released, so perhaps there was a 0.1.0 release prior to that?

On the other hand, OpenTSDB depends on asyncbigtable 0.2.1-20160228.235952-3, but that version isn't listed as a version of com.pythian.opentsdb/asyncbigtable, so it's unclear where it comes from.

It would be nice to clarify this module's release history:

  • add v0.1.0 with features, date of release, etc.
  • clarify where v0.2.x was released and how

@csoulios, I'm guessing you're the expert on the history of this module, so would be great to get an authoritative record from you.

DeleteRequest is not working in v0.4.0

Hey guys,
I'm trying to delete a row with a simple DeleteRequest, but the data never gets deleted. :(
The table is set with the GC policy versions() > 1.
Create, update and read operations work as expected, but delete does not.
Also, the addErrback() callback on the deferred is never invoked.

  DeleteRequest del = new DeleteRequest(TABLE, KEY, FAMILY);
  Deferred<Object> callback = CLIENT.delete(del);
  callback.addErrback(eb -> {
    System.out.println("errback: " + eb.toString());
    return null;
  });
  CLIENT.flush().join();
  callback.join();

When I read the data after that operation, it's still there.
Please, help me out.

Implement Row Filter for BigTable

com.google.cloud.bigtable.hbase.adapters.filters.UnsupportedFilterException: Unsupported filters encountered: FilterSupportStatus{isSupported=false, reason='Don't know how to adapt Filter class 'class org.apache.hadoop.hbase.filter.RowFilter''}
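
For reference, the error above is raised when the Bigtable HBase adapter is asked to translate an org.apache.hadoop.hbase.filter.RowFilter. The snippet below is only an illustration of the kind of HBase-API filter involved, written against the plain HBase client API with placeholder values; it is not asyncbigtable's own scanner interface.

  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.BinaryComparator;
  import org.apache.hadoop.hbase.filter.CompareFilter;
  import org.apache.hadoop.hbase.filter.RowFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  // A scan carrying a RowFilter; when the Bigtable adapter tries to translate it,
  // it throws the UnsupportedFilterException shown above.
  Scan rowFilterScan() {
    final Scan scan = new Scan();
    scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
                                 new BinaryComparator(Bytes.toBytes("row-1"))));
    return scan;
  }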

Delete entire row function doesn't work

Found while trying to use the API in OpenTSDB to delete some time series. The code in question in OpenTSDB is: https://github.com/OpenTSDB/opentsdb/blob/master/src/core/TsdbQuery.java#L781

final DeleteRequest del = new DeleteRequest(tsdb.dataTable(), key);

The result of System.out.println(del):

DeleteRequest(table="tsdb", key=[0, 0...], family="", qualifiers=[""], attempt=0, region=)

The corresponding constructor in asyncbigtable is:

https://github.com/OpenTSDB/asyncbigtable/blob/master/src/main/java/org/hbase/async/DeleteRequest.java#L69

  public DeleteRequest(final byte[] table, final byte[] key) {
    this(table, key, null, null, KeyValue.TIMESTAMP_NOW, RowLock.NO_LOCK);
  }

Exception from asyncbigtable

2017-11-14 13:02:03,372 ERROR [OpenTSDB I/O Worker #5] BigtableTable: Encountered exception when executing delete.
com.google.bigtable.repackaged.io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Error in field 'Mutation list' : Error in element #0 : Invalid id for collection columnFamilies : Length should be between [1,64], but found 0 ''
at com.google.bigtable.repackaged.io.grpc.Status.asRuntimeException(Status.java:503) ~[asyncbigtable-0.2.2-SNAPSHOT-jar-with-dependencies.jar:na]
at com.google.cloud.bigtable.grpc.BigtableDataGrpcClient.getBlockingResult(BigtableDataGrpcClient.java:396) ~[asyncbigtable-0.2.2-SNAPSHOT-jar-with-dependencies.jar:na]
at com.google.cloud.bigtable.grpc.BigtableDataGrpcClient.getBlockingUnaryResult(BigtableDataGrpcClient.java:358) ~[asyncbigtable-0.2.2-SNAPSHOT-jar-with-dependencies.jar:na]
at com.google.cloud.bigtable.grpc.BigtableDataGrpcClient.mutateRow(BigtableDataGrpcClient.java:255) ~[asyncbigtable-0.2.2-SNAPSHOT-jar-with-dependencies.jar:na]
at com.google.cloud.bigtable.hbase.BigtableTable.delete(BigtableTable.java:347) ~[asyncbigtable-0.2.2-SNAPSHOT-jar-with-dependencies.jar:na]
at org.hbase.async.HBaseClient.delete(HBaseClient.java:1228) [asyncbigtable-0.2.2-SNAPSHOT-jar-with-dependencies.jar:na]
at net.opentsdb.core.TsdbQuery$1ScannerCB.processRow(TsdbQuery.java:782) [tsdb-2.3.0.jar:b81ef90]
at net.opentsdb.core.TsdbQuery$1ScannerCB.call(TsdbQuery.java:750) [tsdb-2.3.0.jar:b81ef90]
at net.opentsdb.core.TsdbQuery$1ScannerCB.call(TsdbQuery.java:575) [tsdb-2.3.0.jar:b81ef90]
at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278) [async-1.4.0.jar:na]
at com.stumbleupon.async.Deferred.addCallbacks(Deferred.java:688) [async-1.4.0.jar:na]
at com.stumbleupon.async.Deferred.addCallback(Deferred.java:724) [async-1.4.0.jar:na]
at net.opentsdb.core.TsdbQuery$1ScannerCB.scan(TsdbQuery.java:616) [tsdb-2.3.0.jar:b81ef90]
at net.opentsdb.core.TsdbQuery.findSpans(TsdbQuery.java:878) [tsdb-2.3.0.jar:b81ef90]
at net.opentsdb.core.TsdbQuery.runAsync(TsdbQuery.java:515) [tsdb-2.3.0.jar:b81ef90]
<snip>

Adding a family argument to the call makes the resulting request:

DeleteRequest(table="tsdb", key=[0, 0...], family="t", qualifiers=[""], attempt=0, region=)

This request does not throw the exception; however, no rows are deleted because qualifiers is empty.

Adding valid qualifiers to the argument deletes the cells as expected (a short sketch follows the example request below).

DeleteRequest(table="tsdb", key=[0, 0...], family="t", qualifiers=["\x00"], attempt=0, region=)

bigtable-hbase-2.x-hadoop-1.9.0 and JDK 11 issue

It seems that Java 11 is not supported. I get this error when I try to read data from Bigtable:

javax.net.ssl.SSLHandshakeException: General OpenSslEngine problem
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:629)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.internal.tcnative.SSL.readFromSSL(Native Method)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:511)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1060)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1169)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:211)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1297)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1199)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1243)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassCastException: class com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.OpenSslEngine cannot be cast to class sun.security.ssl.SSLEngineImpl (com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.OpenSslEngine is in unnamed module of loader 'app'; sun.security.ssl.SSLEngineImpl is in module java.base of loader 'bootstrap')
at java.base/sun.security.ssl.SSLAlgorithmConstraints.<init>(SSLAlgorithmConstraints.java:132)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:267)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify(ReferenceCountedOpenSslClientContext.java:237)
at com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:625)
... 26 more
} on channel 22. Trailers: Metadata(bigtable-channel-id=22)"}

I believe this is because the Bigtable driver uses an old version of Netty that does not support Java 11. Am I right? By the way, the code runs fine on Java 8.

Update version of Bigtable client library from 0.9.6

The current Bigtable dependencies in pom.xml are as follows:

<dependency>
  <groupId>com.google.cloud.bigtable</groupId>
  <artifactId>bigtable-hbase-1.2</artifactId>
  <version>0.9.6</version>
</dependency>

<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version>1.1.33.Fork26</version>
</dependency>

This should be updated either to the 0.9.7.x series or straight to the 1.0.0 series (the current latest version is 1.0.0-pre2; we can upgrade now or wait until 1.0.0 final is released).

@sduskis, @igorbernstein — what do you think?

Update asyncbigtable to 0.3.2 with 1.0.0 Bigtable client library

As per @sduskis' latest commits, we now have the final 1.0.0 Bigtable client library dependency in pom.xml:

<dependency>
  <groupId>com.google.cloud.bigtable</groupId>
  <artifactId>bigtable-hbase-1.x</artifactId>
  <version>1.0.0</version>
</dependency>

Can we now properly close #24 by releasing an updated 0.3.2 asyncbigtable?

Currently the asyncbigtable version in OpenTSDB 2.4.0RC2 is broken (only 0.3.0 is uploaded):
https://github.com/OpenTSDB/opentsdb/blob/v2.4.0RC2/third_party/asyncbigtable/include.mk#L16

cc: @tsuna @mbrukman @manolama

PutRequest without calling flush() on the client

Hey guys,
Maybe I'm missing something, but I can't complete a single PutRequest without calling flush() on the client first. If I don't call flush(), the put request blocks and never finishes.

  Deferred<Object> result = client.put(new PutRequest(...));
  result.join();

If I do only this, the call never finishes. So should I call flush() myself to push out the batch?
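
For what it's worth, here is a sketch of the two usual ways to make a buffered put complete, assuming the standard AsyncHBase HBaseClient methods (whether asyncbigtable honors the flush interval in exactly the same way is worth verifying). Table, row and column names are placeholders.

  import com.stumbleupon.async.Deferred;
  import org.hbase.async.HBaseClient;
  import org.hbase.async.PutRequest;

  void putAndWait(final HBaseClient client) throws Exception {
    final Deferred<Object> result = client.put(new PutRequest(
        "table".getBytes(), "row-1".getBytes(), "f".getBytes(),
        "q".getBytes(), "value".getBytes()));

    // Option 1: flush explicitly so the buffered put is actually sent.
    client.flush();
    result.join();

    // Option 2 (configured once at startup): shorten the automatic flush interval
    // so buffered writes go out periodically without manual flush() calls.
    client.setFlushInterval((short) 500);  // milliseconds
  }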

HBaseClient.getBufferedMutator isn't thread safe.

The mutators map is defined as:

ConcurrentHashMap<TableName, AsyncBufferedMutator> mutators

Here's getBufferedMutator():

  private AsyncBufferedMutator getBufferedMutator(TableName table) {
    AsyncBufferedMutator mutator = mutators.get(table);

    if (mutator == null) {
      synchronized (mutators) {
        mutator = hbase_asyncConnection.getBufferedMutator(table);
        mutators.put(table, mutator);
      }
    }

    return mutator;
  }

This logic won't work, since there is a check-then-act race: two threads can both see a null entry and create two different mutators for the same table, and one of them silently overwrites the other in the map. Either use a regular HashMap and synchronize around it, or use putIfAbsent/computeIfAbsent in the right way, as sketched below.
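
A minimal sketch of the computeIfAbsent variant, reusing the field names from the snippet above (the actual fix in the code base may look different):

  private AsyncBufferedMutator getBufferedMutator(final TableName table) {
    // computeIfAbsent performs the check-and-create atomically, so concurrent
    // callers for the same table always observe a single mutator instance.
    return mutators.computeIfAbsent(table, hbase_asyncConnection::getBufferedMutator);
  }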

Update version of Cloud Bigtable client library

The current version of the Cloud Bigtable client in use is 0.2.2-20151029.111903-12, which is quite old; many newer versions of the client library have been released since then, and performance has improved significantly.

These release notes show the history of the Cloud Bigtable client releases.

The pom.xml file should be updated to use the latest version; a new release of this module will then enable OpenTSDB to use it directly without any additional work from users.

cc: @csoulios, @sduskis

Add FilterList size method to make it compatible with AsyncHBase

The size method returns the number of filters in the filter list. Since it isn't implemented by AsyncBigtable, it breaks the following test cases when building OpenTSDB (a sketch of the missing method follows the compiler output below):

../test/core/TestTsdbQueryQueries.java:1712: error: cannot find symbol
assertEquals(2, filter_list.size());
^
symbol: method size()
location: variable filter_list of type FilterList
../test/core/TestTsdbQueryQueries.java:1747: error: cannot find symbol
assertEquals(2, filter_list.size());
^
symbol: method size()
location: variable filter_list of type FilterList
../test/core/TestTsdbQueryQueries.java:1777: error: cannot find symbol
assertEquals(2, filter_list.size());
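
A minimal sketch of the missing method (the 0.4.3 changelog above notes that FilterList.size() was eventually added; the backing field name here is an assumption, not the actual implementation):

  // In org.hbase.async.FilterList: expose the number of wrapped filters so that
  // asynchbase-style code such as filter_list.size() compiles against asyncbigtable.
  public int size() {
    return filters.size();  // 'filters' is assumed to be the list of wrapped filters
  }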
