The DevOps Guide: Managing Tables in a PostgreSQL Docker Container (Create, Update, Drop)


Running PostgreSQL within a Docker container is a standard practice for development, testing, and microservices isolation. A PostgreSQL Docker container encapsulates the entire database environment, allowing developers and DBAs to manage data structures efficiently. This guide details the essential SQL and Docker commands necessary to create, update records, and drop tables inside a live containerized PostgreSQL instance.


I. Prerequisites: Setting Up the PostgreSQL Container

Before managing tables, you must launch and connect to the PostgreSQL service running inside the Docker environment.

Step 1: Run and Start the PostgreSQL Container

Use the docker run command to create and start a background container, mapping the default port (5432) and setting the initial password.

docker run --name postgresCont -e POSTGRES_PASSWORD=pass123 -p 5432:5432 -d postgres
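
To confirm the container started successfully, list the running containers; the name postgresCont and the 5432 port mapping should appear in the output.

docker ps --filter "name=postgresCont"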

Step 2: Access the Container Shell

Execute the following command to open an interactive Bash shell within the running container:

docker exec -it postgresCont bash
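
As a shortcut, you can also skip the intermediate shell and launch the psql client directly through docker exec (this assumes the same container name and default postgres superuser used above):

docker exec -it postgresCont psql -U postgres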

Step 3: Connect to the PostgreSQL Server

From the container shell, connect to the PostgreSQL server using the psql client:

psql -h localhost -U postgres

Step 4: Create and Connect to a New Database

Once connected, create a database for your application (e.g., tsl_employee) and switch to it using the \c meta-command.

CREATE DATABASE tsl_employee;
\c tsl_employee
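
You can confirm the database was created at any point with the \l meta-command, which lists all databases on the server:

\l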

II. How to Create and Populate a Table

Step 5: Define and Create the Table

Use the CREATE TABLE Data Definition Language (DDL) command to define the table structure, including column names, data types, and integrity constraints.

CREATE TABLE tech_authors(
ID INT PRIMARY KEY NOT NULL,
NAME TEXT NOT NULL,
TYPE TEXT NOT NULL,
CATEGORY TEXT NOT NULL,
ARTICLES INT NOT NULL
);
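
To verify the structure was created as intended, describe the new table with the \d meta-command, which lists each column along with its data type and constraints:

\d tech_authors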

Step 6: Insert Records into the Table

Use the INSERT INTO command to populate the new table with data. Ensure the values match the specified column data types.

INSERT INTO tech_authors VALUES
(1, 'Laiba', 'Senior', 'Docker', 50);
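
INSERT INTO also accepts an explicit column list and multiple row tuples in a single statement, which is less error-prone if the table definition later changes. The rows below are illustrative sample data only:

INSERT INTO tech_authors (ID, NAME, TYPE, CATEGORY, ARTICLES) VALUES
(2, 'Anees', 'Junior', 'PostgreSQL', 20),
(3, 'Sara', 'Senior', 'Kubernetes', 35);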

Step 7: Verify Table Data (SELECT)

Use the SELECT command to view all records, or specify certain columns for verification.

-- View all records
SELECT * FROM tech_authors;
-- View specific columns
SELECT ID, NAME, TYPE FROM tech_authors;

III. How to Update Records in a Table

Step 8: Update Specific Records

The UPDATE command modifies existing data records. It should almost always be paired with a WHERE clause that specifies which rows to change; without a WHERE clause, the command updates every row in the table.

-- Display the table data before update
SELECT * FROM tech_authors;
-- Update the CATEGORY column for the author with ID=1
UPDATE tech_authors
SET CATEGORY = 'Linux'
WHERE ID = 1;
-- Verify the update
SELECT * FROM tech_authors;
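
PostgreSQL also supports a RETURNING clause on UPDATE, which echoes the modified rows and removes the need for a separate verification SELECT. A minimal sketch (the new CATEGORY value here is illustrative):

-- Update and display the changed row in one statement
UPDATE tech_authors
SET CATEGORY = 'Containers'
WHERE ID = 1
RETURNING *;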

IV. How to Delete Data and Drop the Table

Step 9: Delete Specific Records (DELETE)

Use the DELETE FROM command with a WHERE clause to remove one or more specific rows from the table.

-- Delete the record where ID is 5
DELETE FROM tech_authors WHERE id = 5;
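
As with UPDATE, PostgreSQL's DELETE supports a RETURNING clause, which is a convenient way to confirm exactly which rows were removed (it simply returns zero rows if nothing matched):

-- Delete and display the removed row(s)
DELETE FROM tech_authors WHERE id = 5 RETURNING *;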

Step 10: Delete All Records (TRUNCATE or Full DELETE)

To remove all records from the table, you can use the DELETE FROM command without a WHERE clause, or the faster TRUNCATE TABLE command, which reclaims the table's storage immediately and, with the RESTART IDENTITY option, also resets any attached auto-incrementing sequences (a TRUNCATE example follows the DELETE snippet below).

-- Delete all records (resets the table)
DELETE FROM tech_authors;
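
The equivalent TRUNCATE statement is shown below; the RESTART IDENTITY option is only needed if you also want identity or serial sequences attached to the table reset to their start values.

-- Remove all rows quickly; RESTART IDENTITY also resets sequences
TRUNCATE TABLE tech_authors RESTART IDENTITY;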

Step 11: Delete the Entire Table (DROP TABLE)

To permanently remove the table structure and all its data, use the DROP TABLE DDL command.

-- Verify the table exists
\dt
-- Execute the permanent deletion
DROP TABLE tech_authors;
-- Verify the table is deleted
\dt
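
In scripts that may run against a database where the table has already been removed, the IF EXISTS variant avoids an error:

-- Drop the table only if it is present
DROP TABLE IF EXISTS tech_authors;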

V. Security Note: Managing Docker Container Access

While Docker simplifies deployment, the access controls shown here (such as the plaintext POSTGRES_PASSWORD=pass123 passed on the command line) are not acceptable for production or complex staging environments. In enterprise and regulated environments, direct credential exposure and local password management pose a major security risk, especially for destructive DDL commands like DROP TABLE.

Tools like StrongDM provide an essential layer of security by acting as an Identity-Aware Control Plane for your containerized databases. This ensures:

  • Credential Elimination: Developers never see or handle the actual database passwords.
  • Just-in-Time Access: Temporary, verified access is granted based on user identity, eliminating "standing access."
  • Complete Audit: Every CREATE, UPDATE, and DROP command executed within the container is logged and attributed to the verified user, ensuring compliance and forensic capability.

Frequently Asked Questions (FAQ) on PostgreSQL Docker Operations

Q: What is the difference between DELETE FROM table and TRUNCATE TABLE?

A: DELETE FROM table removes rows one by one, fires any row-level triggers, and leaves dead tuples behind for VACUUM to clean up. TRUNCATE TABLE reclaims the table's storage immediately, making it much faster for emptying large tables, and can optionally reset sequence counters with RESTART IDENTITY. Unlike in some other database systems, both commands are transactional in PostgreSQL and can be rolled back if run inside an explicit transaction.
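
A quick way to confirm this behavior in a psql session is to wrap the operation in an explicit transaction (this assumes the tech_authors table from the earlier steps still exists and contains data):

BEGIN;
TRUNCATE TABLE tech_authors;
SELECT * FROM tech_authors;  -- empty inside the transaction
ROLLBACK;
SELECT * FROM tech_authors;  -- original rows are back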

Q: Is it safe to use the default 'postgres' user for daily work inside the container?

A: It is common for simple development/testing, but highly insecure for production. The postgres user is the superuser with unlimited privileges. Best practice is to create dedicated, non-superuser roles for application and user access, even within a containerized environment.
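
A minimal sketch of such a least-privilege setup is shown below; the role name, password, and grants are chosen purely for illustration and should be adapted to your own schema.

-- Create a login role with limited rights (name and password are illustrative)
CREATE ROLE app_user WITH LOGIN PASSWORD 'change_me';
GRANT CONNECT ON DATABASE tsl_employee TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON tech_authors TO app_user;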

Q: Why should I map port 5432 in the 'docker run' command?

A: The -p 5432:5432 flag maps the container's internal PostgreSQL port (5432) to port 5432 on the host machine. This lets external clients on the host (such as pgAdmin, DBeaver, or an application running locally) connect to the containerized database.
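
For example, with the mapping above in place and a psql client installed on the host, you can connect from outside the container using the password set in Step 1:

psql -h localhost -p 5432 -U postgres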
