Does the JDBC driver take table level encoding into account?
We have a decades-old database whose default character set is latin1, but individual tables deviate and use utf8. Updating columns via setString works fine; using setCharacterStream does not. It fails with:
Caused by: java.sql.SQLException: Incorrect string value: '\xF2f een...' for column 'message' at row 1
at com.mysql@8.2.0//com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:130)
at com.mysql@8.2.0//com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql@8.2.0//com.mysql.cj.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:555)
at com.mysql@8.2.0//com.mysql.cj.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:339)
at com.mysql@8.2.0//com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1054)
at com.mysql@8.2.0//com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1003)
at com.mysql@8.2.0//com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1312)
at com.mysql@8.2.0//com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdate(ClientPreparedStatement.java:988)
at org.jboss.ironjacamar.jdbcadapters@1.5.3.Final//org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:537)
at deployment.ioserver.ear//io.ebeaninternal.server.bind.DataBind.executeUpdate(DataBind.java:102) [ebean-core-14.0.2-javax.jar:14.0.2-javax]
at deployment.ioserver.ear//io.ebeaninternal.server.persist.dml.InsertHandler.execute(InsertHandler.java:105) [ebean-core-14.0.2-javax.jar:14.0.2-javax]
at deployment.ioserver.ear//io.ebeaninternal.server.persist.dml.DmlHandler.executeNoBatch(DmlHandler.java:88) [ebean-core-14.0.2-javax.jar:14.0.2-javax]
at deployment.ioserver.ear//io.ebeaninternal.server.persist.dml.DmlBeanPersister.execute(DmlBeanPersister.java:69) [ebean-core-14.0.2-javax.jar:14.0.2-javax]
... 132 more
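The rejected byte in the error message points at the encoding mismatch: 0xF2 is the latin1 encoding of 'ò', which is not a valid UTF-8 lead byte for that position. A minimal, standalone sketch (no database needed) showing that the same character produces different bytes under the two charsets:

```java
import java.nio.charset.StandardCharsets;

public class CharsetBytes {
    public static void main(String[] args) {
        String s = "ò"; // first character of the value rejected in the error above

        // latin1 (ISO-8859-1) encodes it as the single byte 0xF2 — the byte MySQL complains about
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.printf("latin1: %02X%n", latin1[0] & 0xFF);

        // UTF-8 encodes the same character as the two-byte sequence C3 B2,
        // which is what a utf8 column would need to receive
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        System.out.printf("utf8:   %02X %02X%n", utf8[0] & 0xFF, utf8[1] & 0xFF);
    }
}
```

If the driver encodes the character stream with the session charset (latin1 here) while the target column is utf8, the server sees exactly this invalid byte, which would match the "Incorrect string value: '\xF2...'" error.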
According to the JDBC documentation, setCharacterStream is responsible for applying the correct encoding, so I cannot fault the ORM library for not doing a conversion itself.
From what I gather from examining the sources of mysql-connector-j, it only looks at the session for encoding information, not at the table level. If that is correct, it would mean the JDBC driver does not support the database 'feature' of per-table character sets. Or is there a way to get it to take those charsets into account?
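For what it's worth, the only knob I have found so far operates at the session level, not per table: Connector/J's documented characterEncoding connection property. A sketch of what I mean (host and database name are placeholders), which would pin the whole session to UTF-8 rather than honoring each table's charset:

```
jdbc:mysql://host:3306/legacydb?characterEncoding=UTF-8
```

That feels like a blunt instrument for a mixed latin1/utf8 schema, which is why I am asking whether the driver can be made charset-aware per table.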