
I want to insert rows into a table with 118 columns. It works perfectly with mssql-jdbc-8.2.2.jre8.jar, but every later version I tested (from 9.2.1 to 12.8.1) throws java.sql.BatchUpdateException, with a note that the column length at position 118 is invalid. It happens only if the 118th column is VARCHAR or NVARCHAR; there is no error with FLOAT or other numeric types. If anybody has already faced this, or knows a solution, please let me know. Here is my PreparedStatement's command:

insert into tt
(CUSTOMER_ID,Value_date,_default,
J_termek1,J_termek2,J_termek1_appl_pont,J_termek1_beh_pont,J_termek2_appl_pont,J_termek2_beh_pont,
J_termek1_appl_intercept,J_termek1_appl_beta,J_termek1_beh_intercept,J_termek1_beh_beta,J_termek1_appl_weight,J_termek1_beh_weight,
J_REL_def_STD,J_macro_correction,J_termek2_appl_intercept,J_termek2_appl_beta,J_termek2_beh_intercept,J_termek2_beh_beta,
J_termek2_appl_weight,J_termek2_beh_weight,J_termek1_weight,J_termek2_weight,
L_termek1,L_termek2,L_termek1_appl_pont,L_termek1_beh_pont,L_termek2_appl_pont,L_termek2_beh_pont,
L_termek1_appl_intercept,L_termek1_appl_beta,L_termek1_beh_intercept,L_termek1_beh_beta,L_termek1_appl_weight,L_termek1_beh_weight,
L_REL_def_STD,L_macro_correction,L_termek2_appl_intercept,L_termek2_appl_beta,L_termek2_beh_intercept,L_termek2_beh_beta,
L_termek2_appl_weight,L_termek2_beh_weight,L_termek1_weight,L_termek2_weight,
S_termek1,S_termek2,S_termek1_appl_pont,S_termek1_beh_pont,S_termek2_appl_pont,S_termek2_beh_pont,
S_termek1_appl_intercept,S_termek1_appl_beta,S_termek1_beh_intercept,S_termek1_beh_beta,S_termek1_appl_weight,S_termek1_beh_weight,
S_REL_def_STD,S_macro_correction,S_termek2_appl_intercept,S_termek2_appl_beta,S_termek2_beh_intercept,S_termek2_beh_beta,
S_termek2_appl_weight,S_termek2_beh_weight,S_termek1_weight,S_termek2_weight,
O_termek1,O_termek2,O_termek1_appl_pont,O_termek1_beh_pont,O_termek2_appl_pont,O_termek2_beh_pont,
O_termek1_appl_intercept,O_termek1_appl_beta,O_termek1_beh_intercept,O_termek1_beh_beta,O_termek1_appl_weight,O_termek1_beh_weight,
O_REL_def_STD,O_macro_correction,O_termek2_appl_intercept,O_termek2_appl_beta,O_termek2_beh_intercept,O_termek2_beh_beta,
O_termek2_appl_weight,O_termek2_beh_weight,O_termek1_weight,O_termek2_weight,
H_termek1,H_termek2,H_termek1_appl_pont,H_termek1_beh_pont,H_termek2_appl_pont,H_termek2_beh_pont,
H_termek1_appl_intercept,H_termek1_appl_beta,H_termek1_beh_intercept,H_termek1_beh_beta,H_termek1_appl_weight,H_termek1_beh_weight,
H_REL_def_STD,H_macro_correction,H_termek2_appl_intercept,H_termek2_appl_beta,H_termek2_beh_intercept,H_termek2_beh_beta,
H_termek2_appl_weight,H_termek2_beh_weight,H_termek1_weight,H_termek2_weight,
H_atlag_pd,spare3,BEH_PD,behpd,txt1) VALUES
(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,
?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,
?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,
?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
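As an aside, a VALUES clause with 118 placeholders does not have to be typed by hand; a minimal sketch of generating the statement text from a column-name array (the array here is abbreviated to three names for illustration):

```java
import java.util.Collections;

public class BuildInsert {
    public static void main(String[] args) {
        // Abbreviated column list; in practice this would hold all 118 names.
        String[] cols = {"CUSTOMER_ID", "Value_date", "_default"};
        // One "?" per column, joined by commas.
        String placeholders = String.join(",", Collections.nCopies(cols.length, "?"));
        String sql = "insert into tt (" + String.join(",", cols)
                + ") VALUES (" + placeholders + ")";
        System.out.println(sql);
        // → insert into tt (CUSTOMER_ID,Value_date,_default) VALUES (?,?,?)
    }
}
```

This keeps the column list and the placeholder count in sync automatically, which avoids off-by-one mistakes when columns are added.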

I tried using NVARCHAR instead of VARCHAR without success. I also tried adding one more VARCHAR column to the insert (after creating a new table with that additional column), and in that case the error message said the error happens at column 119 (the new one). Stack trace:

java.sql.BatchUpdateException: A rendszer érvénytelen oszlophosszt kapott a BCP-ügyféltől a következő oszlopazonosítóhoz: 118.
   at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:2042)
   at SAS2SQL.writeToOutput(SAS2SQL.java:1098)
   at SAS2SQL.main(SAS2SQL.java:121)

The error message is in Hungarian. In English it is:

The system received an invalid column length from the BCP client for the following column ID: 118

The part of the code that performs the insert:

        String str = stdin.readLine();
        while (str != null) {
            counter++;
            split(str);
            int k;
            for (int j = 0; j < fields.length; j++) {
                tip = fieldtypes[j];
                textelem = elemek[j];
                k = j + 1;
                if (textelem == null) {
                    pstmt.setString(k, null);
                } else if (tip == Types.VARCHAR || tip == Types.NVARCHAR) {
                    pstmt.setString(k, textelem);
                } else if (tip == Types.INTEGER || tip == Types.BIGINT
                        || tip == Types.SMALLINT || tip == Types.TINYINT) {
                    if (textelem.trim().equals(".")) pstmt.setNull(k, java.sql.Types.FLOAT);
                    else pstmt.setFloat(k, Float.parseFloat(textelem));
                } else if (tip == Types.FLOAT || tip == Types.NUMERIC
                        || tip == Types.DOUBLE || tip == Types.DECIMAL) {
                    if (textelem.trim().equals(".")) pstmt.setNull(k, java.sql.Types.DOUBLE);
                    else pstmt.setDouble(k, Double.parseDouble(textelem));
                } else if (tip == Types.DATE) {
                    if (textelem.trim().equals(".")) pstmt.setNull(k, java.sql.Types.DATE);
                    else pstmt.setDate(k, java.sql.Date.valueOf(textelem));
                } else if (tip == Types.TIMESTAMP) {
                    if (textelem.trim().equals(".")) pstmt.setNull(k, java.sql.Types.TIMESTAMP);
                    else pstmt.setTimestamp(k, Timestamp.valueOf(textelem.trim()));
                } else if (tip == Types.TIME) {
                    if (textelem.trim().equals(".")) pstmt.setNull(k, java.sql.Types.TIME);
                    else pstmt.setTime(k, Time.valueOf(textelem.trim()));
                }
            }
            str = stdin.readLine();
            insertedRows++;
            pstmt.addBatch();
            bulkcounter++;
            if (bulkcounter == batchSize) {
                pstmt.executeBatch();
                pstmt.clearBatch();
                bulkcounter = 0;
                if (monitor) log(getTime() + " / " + insertedRows);
            }
        }
        if (bulkcounter != 0) {
            pstmt.executeBatch();
            pstmt.clearBatch();
            if (monitor) log(getTime() + " / " + insertedRows);
        }
        sqlServerConnection.commit();
        stdin.close();
        pstmt.close();
        log("Number of inserted rows: " + insertedRows);
  • This does not look like a normalized schema to me. 118 columns? Too many. Those subscripts suggest that it's an easy problem with a one-to-many arrangement. More flexible, too.
    – duffymo
  • Can you post the exact error, the stack trace, and the code that does the JDBC work? If possible, a minimal reproducible example.
  • That looks heavily denormalised. You have multiple different prefixes for the same columns (J_, L_, S_, etc.), and then you also have things like J_termek1 and J_termek2. There should be a termek column and a Prefix column (or a better name describing what J_ means), and then one row per prefix and instance of said termek. If you need to denote whether it's termek 1 or 2, add an additional column for that. The table looks like it should have about 20 columns at most, not 118.
    – Thom A
  • Pardon me, but it's not clear (to me) what the problem is. Edit your question and post the entire error message and stack trace. By the way, have you read How to Ask?
    – Abra
  • Add details to the question, not in comments. Use edit.
    – aled

1 Answer


So the problem is that the bulk copy client has narrower limits than the regular insert engine.

Setting setUseBulkCopyForBatchInsert=false solved the problem.
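For reference, the property can be set either in the connection URL or on the connection/data source object; a minimal sketch (server, port, and database names below are placeholders):

```java
public class DisableBulkCopy {
    public static void main(String[] args) {
        // Option 1: disable the bulk-copy path for batch inserts in the JDBC URL.
        String url = "jdbc:sqlserver://myserver:1433;databaseName=mydb;"
                + "useBulkCopyForBatchInsert=false";
        System.out.println(url);

        // Option 2: if a SQLServerDataSource is used, the driver exposes a setter:
        //   com.microsoft.sqlserver.jdbc.SQLServerDataSource ds =
        //           new com.microsoft.sqlserver.jdbc.SQLServerDataSource();
        //   ds.setUseBulkCopyForBatchInsert(false);
    }
}
```

With the property off, executeBatch() goes through regular parameterized inserts instead of the bulk copy API, which sidesteps the BCP-side column-length check that fails here.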
