PostgreSQL 17 is here to revolutionize your database experience! Packed with enhanced performance, cutting-edge features, and improved scalability, this release is designed to meet the evolving demands of modern applications. Whether you're handling complex queries or scaling to new heights, PostgreSQL 17 brings unparalleled reliability and efficiency to your data management solutions.
Jethish October 03, 2024
PostgreSQL, one of the most powerful and flexible open-source databases, continues to evolve with each new release. With PostgreSQL 17, the development community has introduced an array of exciting features designed to enhance performance, scalability, and ease of use. Whether you’re managing large-scale applications or exploring the latest in database innovation, PostgreSQL 17 offers something for everyone. In this blog, we’ll dive into the most interesting features of this release, highlighting the advancements that make it a standout choice for modern database needs.
PostgreSQL 17 introduces incremental backups, enhancing the flexibility and efficiency of its backup and recovery capabilities. By backing up only the data that has changed since the last backup, incremental backups save time and storage space. This feature makes PostgreSQL 17 an appealing option for organizations seeking more effective backup strategies. A new server parameter, summarize_wal, must be enabled before incremental backups can be taken.
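Incremental backups depend on WAL summarization being active on the server. A minimal sketch of enabling it (assuming superuser access; summarize_wal can be changed with a configuration reload, no restart required):

```sql
-- Enable WAL summarization so later pg_basebackup --incremental runs can work
ALTER SYSTEM SET summarize_wal = on;

-- Apply the change without restarting the server
SELECT pg_reload_conf();

-- Confirm the new setting
SHOW summarize_wal;
```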
Create a directory for backups:

```shell
mkdir base_backup1/
```

Then, take a full base backup of the instance:

```shell
pg_basebackup -D base_backup1/ -p 1717 -c fast
```

After that, make some data changes on the instance and take an incremental backup:

```shell
pg_basebackup --incremental=base_backup1/backup_manifest -D base_backup2/ -p 1717 -c fast
```

The incremental backup will now be created. To restore it, use the new pg_combinebackup utility.

Create a new directory for restoring the backup:

```shell
mkdir restore_backup
```

Use the following command to combine the full backup with the incremental backup into a restorable data directory:

```shell
pg_combinebackup base_backup1/ base_backup2/ -o /var/lib/pgsql/17/restore_backup
```

PostgreSQL 17 introduces the pg_createsubscriber utility, which simplifies converting a physical replica into a logical replication subscriber.
```shell
pg_createsubscriber -D Subscriber_datadirectory -p Subscriber_Port -P 'host=Publisher_Ip port=Publisher_port user=postgres' -d Database -U Username -v --publication=publication_name --subscription=subscriber_name
```

When we set up logical replication between a source and a target, all the data must initially be copied from the source to the target. Depending on the size of the tables involved, this process can take a considerable amount of time.
However, with pg_createsubscriber, you no longer need to perform the initial data copy, as the data is already present on the standby from physical replication.
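Once the conversion finishes, the standby starts up as a regular logical subscriber. As a sanity check (the catalog views are standard; the specific subscription and publication names depend on your setup), you could verify the pieces on both sides:

```sql
-- On the new subscriber: the subscription created by pg_createsubscriber
SELECT subname, subenabled FROM pg_subscription;

-- On the publisher: the publication and the replication slot serving it
SELECT pubname FROM pg_publication;
SELECT slot_name, active FROM pg_replication_slots;
```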
With PostgreSQL 17, the COPY command has become more user-friendly and now includes options for error handling. It introduces two new parameters: ON_ERROR and LOG_VERBOSITY.
When ON_ERROR is set to IGNORE, rows with data type incompatibilities are skipped; this applies only to text and CSV format input files. The default, ON_ERROR stop, aborts the COPY on the first error.
The LOG_VERBOSITY parameter allows users to control the level of information that is logged.
In PostgreSQL 16, the option is not recognized:

```sql
test=# COPY employee (first_name, last_name, email, hire_date, salary) FROM '/var/lib/pgsql/16/new_data.txt' WITH (DELIMITER ',', NULL '', ON_ERROR ignore, HEADER FALSE);
ERROR:  option "on_error" not recognized
LINE 1: ...sql/16/new_data.txt' WITH (DELIMITER ',', NULL '',ON_ERROR i...
```

In PostgreSQL 17, incompatible rows are skipped and reported:

```sql
test=# COPY employee (first_name, last_name, email, hire_date, salary) FROM '/var/lib/pgsql/17/new_data.txt' WITH (DELIMITER ',', NULL '', ON_ERROR ignore, LOG_VERBOSITY verbose, HEADER FALSE);
NOTICE:  skipping row due to data type incompatibility at line 1 for column "last_name": "maxwell"
NOTICE:  skipping row due to data type incompatibility at line 2 for column "first_name": "Rexcily"
NOTICE:  2 rows were skipped due to data type incompatibility
COPY 1
```

Create a JSON table:
```sql
test=# CREATE TABLE Employee (json_col jsonb);
CREATE TABLE
```

Insert data into it:

```sql
test=# INSERT INTO Employee VALUES ('{ "person": [{ "name":"Jethish", "address":"Tamilnadu"},{ "name":"Rexcily", "address":"Bangalore"},{ "name":"Praveen", "address":"Kerala"}] }');
INSERT 0 1
```

Using the new JSON_TABLE function, read the table:
```sql
test=# SELECT person.*
FROM Employee,
     JSON_TABLE(json_col, '$.person[*]' COLUMNS (name VARCHAR(40) PATH '$.name', address VARCHAR(100) PATH '$.address')) person;
  name   |  address
---------+-----------
 Jethish | Tamilnadu
 Rexcily | Bangalore
 Praveen | Kerala
(3 rows)
```

PostgreSQL 17 also ships a set of standard SQL/JSON constructor and query functions:

```sql
test=# SELECT json('{"first_name": "jethish", "last_name": "maxwell"}');
                       json
---------------------------------------------------
 {"first_name": "jethish", "last_name": "maxwell"}
(1 row)

test=# SELECT JSON_SERIALIZE('{"Employee": "Jethish", "Address": "Nagercoil" , "Salary" : "10000"}' RETURNING bytea);
                                                               json_serialize
--------------------------------------------------------------------------------------------------------------------------------------------
 \x7b22456d706c6f796565223a20224a657468697368222c202241646472657373223a20224e61676572636f696c22202c202253616c61727922203a20223130303030227d
(1 row)

test=# SELECT json_scalar(now());
            json_scalar
------------------------------------
 "2024-10-03T07:06:45.014598+00:00"
(1 row)

test=# SELECT JSON_QUERY(jsonb '{"Employee": "jethish", "employee_details": ["jethish", "10000"]}', '$.employee_details');
      json_query
----------------------
 ["jethish", "10000"]
(1 row)

test=# SELECT JSON_EXISTS(jsonb '{"key1": [1, 2, 3]}', '$.key1[1]');
 json_exists
-------------
 t
(1 row)

test=# SELECT JSON_VALUE(jsonb '[1, 2]', '$[0]');
 json_value
------------
 1
(1 row)
```

VACUUM, ANALYZE, CLUSTER, REFRESH MATERIALIZED VIEW, REINDEX, and LOCK TABLE operations on tables can now be performed without superuser privileges, thanks to the introduction of a new role called pg_maintain.
Currently, there is a table named maintenance.
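For reproducibility, the demo objects can be created along these lines (a sketch only; the exact table definition is an assumption inferred from the maintenance_id_seq sequence that appears in the \dp+ output):

```sql
-- Hypothetical setup for the pg_maintain demo
CREATE DATABASE pg_maintain;
-- after connecting to the pg_maintain database:
CREATE TABLE maintenance (
    id   serial PRIMARY KEY,
    note text
);
```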
```sql
pg_maintain=# \dt+
                                      List of relations
 Schema |    Name     | Type  |  Owner   | Persistence | Access method | Size  | Description
--------+-------------+-------+----------+-------------+---------------+-------+-------------
 public | maintenance | table | postgres | permanent   | heap          | 48 kB |
(1 row)

pg_maintain=# \dp+
                                   Access privileges
 Schema |        Name        |   Type   | Access privileges | Column privileges | Policies
--------+--------------------+----------+-------------------+-------------------+----------
 public | maintenance        | table    |                   |                   |
 public | maintenance_id_seq | sequence |                   |                   |
(2 rows)
```

As observed, no privileges have been granted on the table.
Next, create a new user with no privileges:
```sql
pg_maintain=# CREATE USER mafiree;
CREATE ROLE
```

Log in to the server using the newly created user, mafiree:

```shell
[postgres@mafiree ~]$ psql -p 1717 -U mafiree pg_maintain
```

After successfully logging in, attempt to perform a read and a vacuum operation:

```sql
pg_maintain=> SELECT * FROM maintenance;
ERROR:  permission denied for table maintenance
pg_maintain=> VACUUM maintenance;
WARNING:  permission denied to vacuum "maintenance", skipping it
VACUUM
```

Now grant the pg_maintain role to the new user mafiree:

```sql
pg_maintain=# GRANT pg_maintain TO mafiree;
GRANT ROLE
```

Next, try the vacuum operation again:

```shell
[postgres@mafiree ~]$ psql -p 1717 -U mafiree pg_maintain
```

```sql
pg_maintain=> VACUUM maintenance;
VACUUM
```

The vacuum completed successfully. However, the user still cannot read the table:

```sql
pg_maintain=> SELECT * FROM maintenance;
ERROR:  permission denied for table maintenance
```

The MAINTAIN privilege can also be granted on a specific table:
```sql
pg_maintain=# GRANT MAINTAIN ON TABLE maintenance TO mafiree_one;
GRANT
```

The EXPLAIN command now features two new options: SERIALIZE and MEMORY. These options help database administrators and developers analyze and optimize the performance of SQL queries. By reviewing the execution plan, you can identify areas for improvement and ensure your queries run efficiently.
```sql
test=# EXPLAIN (ANALYZE, SERIALIZE, MEMORY, BUFFERS) SELECT * FROM employee;
                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------
 Seq Scan on employee  (cost=0.00..12.50 rows=250 width=290) (actual time=0.007..0.008 rows=8 loops=1)
   Buffers: shared hit=1 dirtied=1
 Planning:
   Buffers: shared hit=54
   Memory: used=9kB  allocated=16kB
 Planning Time: 0.122 ms
 Serialization: time=0.006 ms  output=1kB  format=text
 Execution Time: 0.021 ms
(8 rows)
```

PostgreSQL 17 also adds support for the RETURNING clause in MERGE statements. This lets developers retrieve the rows affected by a MERGE operation in a single step, minimizing the need for additional queries.
Create a Table Called Products
```sql
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    category VARCHAR(50),
    price NUMERIC(10, 2) NOT NULL
);
```

Using the MERGE command with the RETURNING clause, introduced in PostgreSQL 17, you can update an existing value:
```sql
test=# MERGE INTO products p
USING (VALUES ('Laptop', 'Electronics', '899.99'::numeric)) v(name, category, price)
ON p.name = v.name
WHEN MATCHED THEN
  UPDATE SET category = v.category, price = v.price
WHEN NOT MATCHED THEN
  INSERT (name, category, price) VALUES (v.name, v.category, v.price)
RETURNING *;
  name  |  category   | price  | id |  name  |  category   | price
--------+-------------+--------+----+--------+-------------+--------
 Laptop | Electronics | 899.99 |  1 | Laptop | Electronics | 899.99
(1 row)
```

This command updates the existing row and returns it.
Now, let’s insert data using the MERGE command and the RETURNING clause:
```sql
test=# MERGE INTO products p
USING (VALUES ('Desk', 'Furniture', '199.99'::numeric)) v(name, category, price)
ON p.name = v.name
WHEN MATCHED THEN
  UPDATE SET category = v.category, price = v.price
WHEN NOT MATCHED THEN
  INSERT (name, category, price) VALUES (v.name, v.category, v.price)
RETURNING *;
 name | category  | price  | id | name | category  | price
------+-----------+--------+----+------+-----------+--------
 Desk | Furniture | 199.99 |  4 | Desk | Furniture | 199.99
(1 row)
```

Upgrading to PostgreSQL 17 opens the door to a range of powerful features that can significantly enhance your database performance and overall user experience. Whether you’re looking to improve query efficiency, implement more advanced data types, or streamline your administrative processes, PostgreSQL 17 offers the tools you need to stay competitive in a rapidly evolving landscape. If you have any questions or require assistance with your PostgreSQL upgrade, feel free to reach out to us at sales@mafiree.com for dedicated PostgreSQL support. Together, let’s unlock the full potential of your data with PostgreSQL 17.