1059

What's the difference between the text data type and the character varying (varchar) data types?

According to the documentation

If character varying is used without length specifier, the type accepts strings of any size. The latter is a PostgreSQL extension.

and

In addition, PostgreSQL provides the text type, which stores strings of any length. Although the type text is not in the SQL standard, several other SQL database management systems have it as well.

So what's the difference?

14 Answers

1273

There is no difference, under the hood it's all varlena (variable length array).

Check this article from Depesz: http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/

A couple of highlights:

To sum it all up:

  • char(n) – takes too much space when dealing with values shorter than n (pads them to n), and can lead to subtle errors because of adding trailing spaces, plus it is problematic to change the limit
  • varchar(n) – it's problematic to change the limit in live environment (requires exclusive lock while altering table)
  • varchar – just like text
  • text – for me a winner – over the (n) data types because it lacks their problems, and over varchar – because it has a distinct name

The article does detailed testing to show that the performance of inserts and selects for all 4 data types is similar. It also takes a detailed look at alternative ways of constraining the length when needed. Function-based constraints or domains provide the advantage of an instant increase of the length constraint, and on the basis that decreasing a string length constraint is rare, depesz concludes that one of them is usually the best choice for a length limit.
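As a sketch of the domain approach (the domain and table names here are illustrative, not from the article), the length limit lives in a named CHECK constraint that can be swapped without rewriting the table:

```sql
-- A domain over text carries the length limit as a named CHECK constraint.
CREATE DOMAIN username_t AS text
  CONSTRAINT username_len CHECK (char_length(VALUE) <= 50);

CREATE TABLE users (name username_t);

-- Raising the limit replaces the constraint without a table rewrite
-- (the new constraint still validates existing rows when added):
ALTER DOMAIN username_t DROP CONSTRAINT username_len;
ALTER DOMAIN username_t ADD CONSTRAINT username_len
  CHECK (char_length(VALUE) <= 100);
```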

  • 70
    @axiopisty It's a great article. You could just say, "Could you pull in some excerpts in case the article ever goes down?" I've tried to briefly summarize the article's content/conclusions. I hope this is enough to ease your concerns.
    – jpmc26
    Commented Apr 10, 2014 at 1:43
  • 37
    @axiopisty, strictly speaking, the initial answer was saying "under the hood it's all varlena", which is certainly useful information that distinguishes this answer from a link-only answer.
    – Bruno
    Commented Jul 22, 2014 at 18:30
  • 50
    One thing to keep in mind with a limitless string is that it opens the potential for abuse. If you allow a user to have a last name of any size, you may have someone storing LARGE amounts of info in your last name field. In an article about the development of reddit, they give the advice to "Put a limit on everything". Commented Mar 12, 2015 at 21:51
  • 12
    @MarkHildreth Good point, though generally constraints like that are enforced further out in an application these days—so that the rules (and attempted violations/retries) can be handled smoothly by the UI. If someone does still want to do this sort of thing in the database they could use constraints. See blog.jonanin.com/2013/11/20/postgresql-char-varchar which includes "an example of using TEXT and constraints to create fields with more flexibility than VARCHAR".
    – Ethan
    Commented Dec 14, 2015 at 13:03
  • 86
    It is really alarming that this comment has so many votes. text should never, ever be considered "a winner over varchar" out of the box just because it allows me to input strings of any length; quite the opposite, you should really think about what kind of data you want to store before allowing your users to input strings of any length. And NO, "let the Frontend handle it" is definitely not acceptable and a very bad development practice. Really surprising to see a lot of devs doing this nowadays. Commented Jul 2, 2020 at 11:20
162

As "Character Types" in the documentation points out, varchar(n), char(n), and text are all stored the same way. The only difference is that extra cycles are needed to check the length, if one is given, plus the extra space and time required if padding is needed for char(n).

However, when you only need to store a single character, there is a slight performance advantage to using the special type "char" (keep the double-quotes — they're part of the type name). You get faster access to the field, and there is no overhead to store the length.

I just made a table of 1,000,000 random "char" chosen from the lower-case alphabet. A query to get a frequency distribution (select count(*), field ... group by field) takes about 650 milliseconds, vs about 760 on the same data using a text field.
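A minimal sketch of that setup (the table name is illustrative); note the quoted type name must be written exactly as "char":

```sql
-- "char" (with quotes) is a single-byte internal type; char(1) without
-- quotes is a full blank-padded character type with length overhead.
CREATE TABLE letters (c "char");

INSERT INTO letters
  SELECT chr(97 + (random() * 25)::int)   -- random lower-case letter
  FROM generate_series(1, 1000000);

-- The frequency-distribution query timed above:
SELECT c, count(*) FROM letters GROUP BY c;
```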

  • 33
    technically the quotes aren't part of the type name. they are needed to differentiate it from the char keyword.
    – Jasen
    Commented Jul 20, 2015 at 5:21
  • 50
    Technically you are correct @Jasen... Which, of course, is the best kind of correct
    – JohannesH
    Commented Aug 6, 2015 at 13:17
  • 2
    The datatype "char" is not char? Is it still valid nowadays, in PostgreSQL 11+? ... Yes: "The type "char" (note the quotes) is different from char(1) in that it only uses one byte of storage. It is internally used in the system catalogs as a simplistic enumeration type.", guide/datatype-character. Commented Dec 2, 2018 at 11:48
125

(this answer is a Wiki, you can edit - please correct and improve!)

UPDATING BENCHMARKS FOR 2016 (pg9.5+)

And using "pure SQL" benchmarks (without any external script)

  1. use any string_generator with UTF8

  2. main benchmarks:

2.1. INSERT

2.2. SELECT comparing and counting


CREATE FUNCTION string_generator(int DEFAULT 20,int DEFAULT 10) RETURNS text AS $f$
  SELECT array_to_string( array_agg(
    substring(md5(random()::text),1,$1)||chr( 9824 + (random()*10)::int )
  ), ' ' ) as s
  FROM generate_series(1, $2) i(x);
$f$ LANGUAGE SQL IMMUTABLE;

Prepare specific test (examples)

DROP TABLE IF EXISTS test;
-- CREATE TABLE test ( f varchar(500));
-- CREATE TABLE test ( f text); 
CREATE TABLE test ( f text  CHECK(char_length(f)<=500) );

Perform a basic test:

INSERT INTO test  
   SELECT string_generator(20+(random()*(i%11))::int)
   FROM generate_series(1, 99000) t(i);

And other tests,

CREATE INDEX q on test (f);

SELECT count(*) FROM (
  SELECT substring(f,1,1) || f FROM test WHERE f<'a0' ORDER BY 1 LIMIT 80000
) t;

... And use EXPLAIN ANALYZE.

UPDATED AGAIN 2018 (pg10)

A little edit to add 2018's results and reinforce the recommendations.


Results in 2016 and 2018

My results, averaged across many machines and many tests: all the same
(differences statistically less than the standard deviation).

Recommendation

  • Use the text datatype,
    avoid old varchar(x) because in some contexts it is not standard, e.g. in CREATE FUNCTION argument declarations the length modifier is ignored, so varchar(x) and varchar(y) are the same type.

  • Express limits (with the same performance as varchar!) with a CHECK clause in the CREATE TABLE,
    e.g. CHECK(char_length(x)<=10).
    With a negligible loss of performance in INSERT/UPDATE you can also control ranges and string structure,
    e.g. CHECK(char_length(x)>5 AND char_length(x)<=20 AND x LIKE 'Hello%')

  • 1
    So it does not matter than I made all of my columns varchar instead of text? I did not specify the length even though some are only 4 - 5 characters and certainly not 255.
    – trench
    Commented Jun 10, 2016 at 23:52
  • 2
    @trench yes, it does not matter Commented Jun 20, 2016 at 14:02
  • 2
    cool, I redid it to be safe and I made everything text anyway. It worked well and it was super easy to add millions of historical records quickly anyways.
    – trench
    Commented Jun 20, 2016 at 14:43
  • @trench and reader: the only exception is the faster datatype "char", which is not char, even nowadays in PostgreSQL 11+. As guide/datatype-character says: "The type "char" (note the quotes) is different from char(1) in that it only uses one byte of storage. It is internally used in the system catalogs as a simplistic enumeration type.". Commented Dec 2, 2018 at 11:55
  • 5
    still valid with pg11 in 2019: text>varchar(n)>text_check>char(n) Commented Feb 8, 2019 at 11:18
61

On PostgreSQL manual

There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.

I usually use text

References: http://www.postgresql.org/docs/current/static/datatype-character.html

41

In my opinion, varchar(n) has its own advantages. Yes, they all use the same underlying type and all that. But it should be pointed out that indexes in PostgreSQL have a size limit of 2712 bytes per index row.

TL;DR: If you use the text type without a constraint and have indexes on these columns, it is very possible that you hit this limit for some of your columns and get an error when you try to insert data; but by using varchar(n), you can prevent it.

Some more details: The problem here is that PostgreSQL doesn't raise any exception when creating indexes for the text type, or for varchar(n) where n is greater than 2712. However, it will raise an error when you try to insert a record whose compressed size is greater than 2712 bytes. That means you can easily insert a 100,000-character string composed of repetitive characters, because it will compress to far below 2712 bytes, but you may not be able to insert a 4,000-character string whose compressed size is greater than 2712 bytes. Using varchar(n), where n is not too much greater than 2712, you're safe from these errors.
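A hedged illustration of that failure mode (the exact byte limit and error text vary by PostgreSQL and btree version):

```sql
CREATE TABLE docs (body text);
CREATE INDEX docs_body_idx ON docs (body);  -- creation succeeds either way

-- Highly repetitive data compresses far below the limit, so this works:
INSERT INTO docs SELECT repeat('x', 100000);

-- Random-looking data barely compresses, so an insert like this can fail
-- with an error along the lines of:
-- ERROR: index row size ... exceeds btree maximum ... for index "docs_body_idx"
INSERT INTO docs
  SELECT string_agg(md5(i::text), '') FROM generate_series(1, 200) i;
```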

  • Later Postgres versions error when trying to create such an index for text; it only works for varchar (the version without the (n)). Only tested with embedded Postgres though.
    – arntg
    Commented Nov 7, 2018 at 16:20
  • 2
    Referring to stackoverflow.com/questions/39965834/… which has a link to the PostgreSQL wiki: wiki.postgresql.org/wiki/… gives the max row size as 400 GB, so the stated 2712-byte limit per row looks wrong. Maximum size for a database? Unlimited (32 TB databases exist). Maximum size for a table? 32 TB. Maximum size for a row? 400 GB. Maximum size for a field? 1 GB. Maximum number of rows in a table? Unlimited. Commented Dec 2, 2018 at 9:57
  • @BillWorthington The numbers you posted don't take indexes into account though. 2712 bytes is roughly btree's max limit; it's an implementation detail, so you won't find it in the documentation. However, you can easily test it yourself, or just google "postgresql index row size exceeds maximum 2712 for index", e.g.
    – yakya
    Commented Dec 2, 2018 at 10:52
  • I am new to PostgreSQL, so I am not the expert. I am working on a project where I want to store news articles in a column in a table. It looks like the text column type is what I will use. A total row size of 2712 bytes sounds way too low for a database that is supposed to be close to the same level as Oracle. Do I understand you correctly that you are referring to indexing a large text field? Not trying to challenge or argue with you, just trying to understand the real limits. If there are no indexes involved, would the row limit then be 400 GB as in the wiki? Thanks for your fast response. Commented Dec 2, 2018 at 13:32
  • 1
    @BillWorthington You should research about Full Text Search. Check this link e.g.
    – yakya
    Commented Dec 2, 2018 at 13:37
31

text and varchar have different implicit type conversions. The biggest impact that I've noticed is handling of trailing spaces. For example ...

select ' '::char = ' '::varchar, ' '::char = ' '::text, ' '::varchar = ' '::text

returns true, false, true and not true, true, true as you might expect.

  • 3
    How is this possible? If a = b and a = c then b = c. Commented Apr 29, 2020 at 4:04
  • 3
    Tested, and it is indeed true. Impossible, but true. Very, very strange. Commented Nov 3, 2020 at 9:36
  • 3
    It's because the = operator is not only comparing the stuff, but it also does some conversions to find a common type for the values. It's pretty common behaviour in various languages, and the used conversions also differ between languages. For example in JavaScript you can see that [0 == '0.0', 0 == '0', '0.0' == '0'] -> [true, true, false]
    – Arsen7
    Commented Jul 20, 2021 at 10:24
18

The difference is between tradition and modern.

Traditionally you were required to specify the width of each table column. If you specified too much width, expensive storage space was wasted; if you specified too little width, some data would not fit. Then you would resize the column, and have to change a lot of connected software and fix the bugs that introduced, which is all very cumbersome.

Modern systems allow for unlimited string storage with dynamic storage allocation, so the incidental large string would be stored just fine without much waste of storage of small data items.

While a lot of programming languages have adopted a data type of 'string' with unlimited size, like C#, JavaScript, Java, etc., a database like Oracle did not.

Now that PostgreSQL supports 'text', a lot of programmers are still used to VARCHAR(N), and reason like: yes, text is the same as VARCHAR, except that with VARCHAR you MAY add a limit N, so VARCHAR is more flexible.

You might as well reason:

why should we bother using the mouthful "VARCHAR WITHOUT N", now that we can simplify our life with just "TEXT"?

In my recent years with Oracle, I have used CHAR(N) or VARCHAR(N) on very few occasions. Because Oracle does (did?) not have an unlimited string type, I used VARCHAR(2000) for most string columns, where 2000 was at some time the maximum for VARCHAR, and for all practical purposes not much different from 'infinite'.

Now that I am working with PostgreSQL, I see TEXT as real progress. No more emphasis on the VAR feature of the CHAR type. No more emphasis on let's use VARCHAR without N. Besides, typing TEXT saves 3 keystrokes compared to VARCHAR.

Younger colleagues would now grow up without even knowing that in the old days there were no unlimited strings, just as in most projects they don't have to know about assembly programming.

Update: Azure type String

Apparently, the modern system of Azure SQL has a generic text type named String, like PostgreSQL type Text, but with an unconfigurable limit of just 500 characters. In Azure, type String seems more commonly used than Varchar(N) which has a limit of 4000. Is this progress?

  • 1
    very informative answer, thanks!
    – devnull Ψ
    Commented Jul 22, 2023 at 11:12
  • 1
    @aderchox I guess you meant to comment: Varchar without N is kept for backwards...
    – Roland
    Commented Oct 1, 2023 at 23:37
  • 1
    Edit: So the gist of it is, just use TEXT. VARCHAR without N is kept for backwards compatibility reasons. (Yeah I've edited it now, thanks :).)
    – aderchox
    Commented Oct 2, 2023 at 5:01
11

A good explanation from http://www.sqlines.com/postgresql/datatypes/text:

The only difference between TEXT and VARCHAR(n) is that you can limit the maximum length of a VARCHAR column, for example, VARCHAR(255) does not allow inserting a string more than 255 characters long.

Both TEXT and VARCHAR have an upper limit of 1 GB, and there is no performance difference between them (according to the PostgreSQL documentation).
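For instance (a sketch, not from the linked page), the varchar(n) limit is enforced at insert time while text accepts any length:

```sql
CREATE TABLE t (v varchar(5), s text);

INSERT INTO t VALUES ('abc', 'any length is fine here');  -- OK

INSERT INTO t VALUES ('abcdef', 'x');
-- ERROR:  value too long for type character varying(5)
```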

6

Somewhat OT: If you're using Rails, the standard formatting of webpages may be different. For data entry forms text boxes are scrollable, but character varying (Rails string) boxes are one-line. Show views are as long as needed.

3

If you only use the TEXT type, you can run into issues when using AWS Database Migration Service:

Large objects (LOBs) are used but target LOB columns are not nullable

Due to their unknown and sometimes large size, large objects (LOBs) require more processing and resources than standard objects. To help with tuning migrations of systems that contain LOBs, AWS DMS offers the following options

If you are sticking to PostgreSQL for everything, you're probably fine. But if you are going to interact with your db via ODBC or external tools like DMS, you should consider not using TEXT for everything.

  • Also for ODBC: Crystal reports considers text a "memo" and won't allow any joins on it even if it's an FK. Varchar (limited or unlimited) works fine
    – btraas
    Commented Oct 5, 2022 at 20:18
1

I wasted way too much time because of using varchar instead of text for PostgreSQL arrays.

PostgreSQL Array operators do not work with string columns. Refer these links for more details: (https://github.com/rails/rails/issues/13127) and (http://adamsanderson.github.io/railsconf_2013/?full#10).

  • Ran into the exact same problem... Commented Jan 11, 2022 at 12:43
1

I have found another annoying difference between them, which gave me a bit of a hard time.

Although VARCHAR (without a size) and TEXT mean more or less the same thing, PostgreSQL still draws a distinction.

The string_agg() function expects either text or bytea data types, and will return the corresponding data type. That doesn’t stop you from using it with the other string data types, such as varchar and its variations.

However, when using it within a user defined function, you can get into strife. For example:

CREATE FUNCTION test(genrename VARCHAR)
RETURNS TABLE (category VARCHAR, items VARCHAR)
LANGUAGE PLPGSQL AS $$ BEGIN
    RETURN QUERY
    SELECT cat, string_agg(item, '|')
    FROM data
    GROUP BY cat;
END $$;

This will result in an error, since the return type of string_agg is not the same as varchar. Changing the return table to items text fixes this.

In other words, though PostgreSQL may treat them as the same, it still maintains a pedantic distinction between varchar and text.
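For completeness, a version of the hypothetical function above that compiles (the data table and its cat/item columns are still assumptions from the original example): declare the column as text, as the answer suggests.

```sql
CREATE FUNCTION test(genrename VARCHAR)
RETURNS TABLE (category VARCHAR, items TEXT)  -- items is now text
LANGUAGE PLPGSQL AS $$ BEGIN
    RETURN QUERY
    SELECT cat, string_agg(item, '|')
    FROM data
    GROUP BY cat;
END $$;
```

Alternatively, keep items as VARCHAR and cast the aggregate: string_agg(item, '|')::varchar.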

1

I would just add one more thing missing in the other answers. It's better to use text, since string functions in Postgres use the text type (as both input and output types), as mentioned in the official Postgres docs.

text is PostgreSQL's native string data type, in that most built-in functions operating on strings are declared to take or return text not character varying. For many purposes, character varying acts as though it were a domain over text.

The type name varchar is an alias for character varying
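This is easy to observe directly (a small sketch; pg_typeof is a standard Postgres function):

```sql
-- Built-in string functions return text even when fed varchar:
SELECT pg_typeof(upper('hello'::varchar));           -- text
SELECT pg_typeof('abc'::varchar || 'def'::varchar);  -- text
```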

-2

character varying(n), varchar(n) - (Both the same). A value that is too long raises an error; only when the excess characters are all spaces is the value silently truncated to n characters.

character(n), char(n) - (Both the same). fixed-length and will pad with blanks till the end of the length.

text - Unlimited length.

Example:

Table test:
   a character(7)
   b varchar(7)

insert "ok    " to a
insert "ok    " to b

We get the results:

 a         | (a)char_length | b    | (b)char_length
-----------+----------------+------+----------------
 "ok     " | 7              | "ok" | 2
  • 14
    While MySQL will silently truncate the data when the value exceeds the column size, PostgreSQL will not and will raise a "value too long for type character varying(n)" error.
    – gsiems
    Commented Mar 14, 2018 at 14:56
  • @gsiems Neither will truncate. MSSQL will throw an exception (msg 8152, level 16, state 30: String or binary data would be truncated). PostgreSQL will do the same, EXCEPT if the overflow is only spaces (then, it will truncate without raising an exception)
    – JCKödel
    Commented Apr 8, 2021 at 22:55
  • @JCKödel gsiems was talking about MySQL, not MSSQL.
    – cdonner
    Commented Apr 20, 2022 at 14:35
