
In a table, the clustering key is an `int` column holding a system-generated number, `chrg`. The issue is that, since it is defined with the `int` datatype, it can only store values up to about 2 billion.

And since the table's data volume is huge, within the next two months of loads we will hit the maximum value the column can store, beyond which loads will fail.

Hence the requirement is to change the datatype of the column to something like `bigint` (a 64-bit integer) with the least impact.
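
For concreteness, here is a minimal sketch of the kind of schema in question, using the DataStax Python driver; the keyspace `billing`, the table `charges`, and every column except `chrg` are assumptions for illustration:

```python
# A minimal sketch of the situation; the keyspace "billing", table "charges",
# and all columns except `chrg` are hypothetical.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("billing")

session.execute("""
    CREATE TABLE IF NOT EXISTS charges (
        account_id text,
        chrg       int,      -- system-generated number; CQL int is 32-bit signed
        amount     decimal,
        PRIMARY KEY (account_id, chrg)
    )
""")

# The ceiling the loads are approaching:
print(2**31 - 1)  # 2147483647, i.e. ~2 billion
```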

How can this be achieved with a minimal downtime?

1 Answer


You cannot change the type of a primary key column in Cassandra.
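
To illustrate the restriction, an in-place type change is rejected by the server (reusing the hypothetical `billing.charges` schema from the question; note that newer Cassandra versions removed `ALTER ... TYPE` altogether, so this fails there too):

```python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("billing")  # hypothetical keyspace

# Attempting to widen the primary key column in place fails; the exact
# exception class depends on the driver/server version, so catch broadly.
try:
    session.execute("ALTER TABLE charges ALTER chrg TYPE bigint")
except Exception as exc:
    print("Rejected by Cassandra:", exc)
```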

So one approach I can think of is:

  1. Create a separate table with the modified datatype (`bigint`); a sketch of this and the next step follows this list.
  2. Modify your application to write data to both tables (dual writes).
  3. Then use Spark and the Cassandra connector to read the historical data from the old table and write it to the new table; see the backfill sketch at the end of this answer.
  4. Once the backfill has caught up, point your application's reads at the new table and stop writing to the old one.
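
A minimal sketch of steps 1 and 2 with the DataStax Python driver, reusing the hypothetical `billing.charges` schema from the question; the `charges_v2` name and the helper function are likewise assumptions:

```python
# Steps 1-2: create the wide-type table and dual-write to both tables.
# All names are hypothetical; only `chrg` comes from the question.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("billing")

# Step 1: same layout as the old table, but chrg is now bigint (64-bit signed).
session.execute("""
    CREATE TABLE IF NOT EXISTS charges_v2 (
        account_id text,
        chrg       bigint,
        amount     decimal,
        PRIMARY KEY (account_id, chrg)
    )
""")

# Step 2: the application writes every new row to both tables.
insert_old = session.prepare(
    "INSERT INTO charges (account_id, chrg, amount) VALUES (?, ?, ?)")
insert_new = session.prepare(
    "INSERT INTO charges_v2 (account_id, chrg, amount) VALUES (?, ?, ?)")

def write_charge(account_id, chrg, amount):
    session.execute(insert_old, (account_id, chrg, amount))
    session.execute(insert_new, (account_id, chrg, amount))
```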

With the above approach, I don't think you will have any major impact or downtime.
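
Finally, a sketch of the step-3 backfill using Spark with the spark-cassandra-connector DataFrame API; the keyspace/table names are the hypothetical ones above, and the connector coordinates in the comment are only an example:

```python
# Step 3: one-off backfill of historical rows from the old table into the
# new one via the spark-cassandra-connector. Run with, for example:
#   spark-submit --packages com.datastax.spark:spark-cassandra-connector_2.12:3.4.1 backfill.py
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("chrg-backfill")
         .config("spark.cassandra.connection.host", "127.0.0.1")
         .getOrCreate())

old = (spark.read.format("org.apache.spark.sql.cassandra")
       .options(keyspace="billing", table="charges")
       .load())

# int -> bigint widening is lossless; cast explicitly to match the new schema.
old = old.withColumn("chrg", col("chrg").cast("long"))

(old.write.format("org.apache.spark.sql.cassandra")
    .options(keyspace="billing", table="charges_v2")
    .mode("append")
    .save())
```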
