Connecting to MinIO Operator/Tenant via SSL


Connecting to MinIO Operator/Tenant via SSL

Robert Cullen

The new MinIO Operator/Tenant model requires connecting over SSL. I’ve added the three public certs that MinIO provides to the truststore using keytool, and I’m passing the JVM params to Flink on the command line as follows:

root@flink-client:/opt/flink# ./bin/flink run --detached --target kubernetes-session \
    -Dkubernetes.cluster-id=flink-jobmanager -Dkubernetes.namespace=flink \
    -Djavax.net.ssl.trustStore=$JAVA_HOME/lib/security/cacerts \
    -Djavax.net.ssl.trustStorePassword=changeit ./usrlib/flink-job.jar

But there is still an SSL validation error:


2021-05-18 13:01:53,635 DEBUG com.amazonaws.auth.AWS4Signer                                [] - AWS4 String to Sign: '"AWS4-HMAC-SHA256
20210518T130153Z
20210518/us-east-1/s3/aws4_request
b38391c7efd22a9ed0bceb93d460732bdd632f2acccf7e9d2d1baa30be69ced2"
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - connecting to /10.42.0.133:9000
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - Connecting socket to /10.42.0.133:9000 with timeout 5000
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - Enabled protocols: [TLSv1.2]
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - Enabled cipher suites:[TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - socket.getSupportedProtocols(): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1, SSLv3, SSLv2Hello], socket.getEnabledProtocols(): [TLSv1.2]
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - TLS protocol enabled for SSL handshake: [TLSv1.2, TLSv1.1, TLSv1]
2021-05-18 13:01:53,636 DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory              [] - Starting handshake
2021-05-18 13:01:53,638 DEBUG com.amazonaws.http.conn.ClientConnectionManagerFactory       [] - 
java.lang.reflect.InvocationTargetException: null
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_292]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_292]
    at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.conn.$Proxy49.connect(Unknown Source) ~[?:1.13.0]
    at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1330) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5062) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5008) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1416) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1352) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:373) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:372) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:308) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.create(AbstractS3FileSystemFactory.java:123) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:62) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:506) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:407) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:64) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage.createCheckpointStorage(FileSystemCheckpointStorage.java:323) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:321) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:240) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:448) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:311) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:107) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:342) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:190) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:132) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:110) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:340) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:317) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:107) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist_2.12-1.13.0.jar:1.13.0]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) [?:1.8.0_292]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_292]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_292]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_292]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_292]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:1.8.0_292]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:324) ~[?:1.8.0_292]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:267) ~[?:1.8.0_292]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:262) ~[?:1.8.0_292]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654) ~[?:1.8.0_292]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:1.8.0_292]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:1.8.0_292]
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377) ~[?:1.8.0_292]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444) ~[?:1.8.0_292]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422) ~[?:1.8.0_292]
    at sun.security.ssl.TransportContext.dispatch(TransportContext.java:182) ~[?:1.8.0_292]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:152) ~[?:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1383) ~[?:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1291) ~[?:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:435) ~[?:1.8.0_292]
    at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:436) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:142) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:374) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    ... 63 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:456) ~[?:1.8.0_292]
    at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:323) ~[?:1.8.0_292]
    at sun.security.validator.Validator.validate(Validator.java:271) ~[?:1.8.0_292]
    at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:315) ~[?:1.8.0_292]
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:223) ~[?:1.8.0_292]
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:129) ~[?:1.8.0_292]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:638) ~[?:1.8.0_292]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:1.8.0_292]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:1.8.0_292]
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377) ~[?:1.8.0_292]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444) ~[?:1.8.0_292]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422) ~[?:1.8.0_292]
    at sun.security.ssl.TransportContext.dispatch(TransportContext.java:182) ~[?:1.8.0_292]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:152) ~[?:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1383) ~[?:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1291) ~[?:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:435) ~[?:1.8.0_292]
    at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:436) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:142) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:374) ~[flink-s3-fs-hadoop-1.13.0.jar:1.13.0]
    ... 63 more
--
Robert Cullen
240-475-4490

Re: Connecting to MinIO Operator/Tenant via SSL

Nico Kruber-3
Just a hunch:
Your command only submits the Flink job to an existing cluster. Did you also
configure the certificates on the cluster's machines? They, not the local
machine submitting the job, ultimately perform these checks.
-> You can specify additional JVM parameters for the TaskManagers and
JobManagers as shown in [1].


Nico

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/config/#env-java-opts
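Concretely (an untested sketch; the option names are from the Flink 1.13 config docs in [1], and the trust-store path is an assumption about where the imported cacerts file lives in the cluster images), that would be something like this in flink-conf.yaml:

```yaml
# Pass the trust store to the JobManager and TaskManager JVMs,
# which perform the TLS handshake against MinIO.
env.java.opts.jobmanager: -Djavax.net.ssl.trustStore=/path/to/cacerts -Djavax.net.ssl.trustStorePassword=changeit
env.java.opts.taskmanager: -Djavax.net.ssl.trustStore=/path/to/cacerts -Djavax.net.ssl.trustStorePassword=changeit
```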

On Tuesday, 18 May 2021 15:13:45 CEST Robert Cullen wrote:

> [...]

--
Dr. Nico Kruber | Solutions Architect

Follow us @VervericaData Ververica
--
Join Flink Forward - The Apache Flink Conference
Stream Processing | Event Driven | Real Time
--
Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Yip Park Tung Jason, Jinwei (Kevin) Zhang, Karl Anton Wehner
