PostgreSQL Architecture: A Brief Explanation

A PostgreSQL instance consists of a set of processes and shared memory. PostgreSQL uses a simple "process per user" client/server model. The major processes are:

  1. The 'postmaster', which is:
    • the supervisory daemon process,
    • attached to the shared memory segment, though it refrains from accessing it directly,
    • always running, waiting for connection requests.
  2. Utility processes (BGwriter, WALwriter, Syslogger, Archiver, Stats collector, Autovacuum launcher, WAL sender and WAL receiver).
  3. User backend processes (the postgres server processes themselves).

1) Postmaster:-
When a client requests a connection to the database, the request first reaches the postmaster daemon process. After performing authentication and authorization, the postmaster forks a new backend server process (postgres). From then on, the frontend process and the backend server communicate directly, without intervention by the postmaster. The postmaster is always running, waiting for connection requests, whereas frontend and backend processes come and go. The libpq library allows a single frontend to make multiple connections to backend processes.

However, each backend is a single-threaded process that can execute only one query at a time, so the communication over any one frontend-to-backend connection is single-threaded.
The postmaster and the postgres backends run with the user ID of the PostgreSQL "superuser".

One postgres process exists for every open database session. Once the session is authenticated, the backend attaches directly to shared memory.

2) User backend process and memory:-
POSTGRESQL SHARED MEMORY
Shared Buffers:
Sets the amount of memory the database server uses for shared memory buffers. The default is typically 32MB (128MB in recent releases). Larger settings for shared_buffers usually require a corresponding increase in checkpoint_segments (replaced by max_wal_size in newer releases), in order to spread out the process of writing large quantities of new or changed data over a longer period of time.

Three background writer parameters control how dirty buffers are flushed:
bgwriter_delay
bgwriter_lru_maxpages
bgwriter_lru_multiplier
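As a sketch, these settings might sit together in postgresql.conf; the values below are purely illustrative, not recommendations:

```
# postgresql.conf -- illustrative values only
shared_buffers = 512MB            # shared buffer pool
bgwriter_delay = 200ms            # sleep between background writer rounds
bgwriter_lru_maxpages = 100       # max dirty pages written per round
bgwriter_lru_multiplier = 2.0     # multiple of recent demand to write ahead
```

A reload (pg_ctl reload) applies the bgwriter settings; changing shared_buffers requires a server restart.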

WAL Buffers:
The amount of shared memory used for WAL data that has not yet been written to disk. The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers, but not less than 64kB nor more than the size of one WAL segment, typically 16MB. This value can be set manually if the automatic choice is too large or too small, but any positive value less than 32kB will be treated as 32kB. This parameter can only be set at server start. The contents of the WAL buffers are written out to disk at every transaction commit, so extremely large values are unlikely to provide a significant benefit. However, setting this value to at least a few megabytes can improve write performance on a busy server where many clients are committing at once. The auto-tuning selected by the default setting of -1 should give reasonable results in most cases.
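As a quick sketch, you can inspect the auto-tuned value from any session:

```sql
-- show the current (possibly auto-tuned) value
SHOW wal_buffers;
```

To override it, set, for example, wal_buffers = 16MB in postgresql.conf and restart the server.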

CLOG Buffers:
$PGDATA/pg_clog contains a log of transaction status metadata. This log tells PostgreSQL which transactions committed and which aborted. The clog is small and never has any reason to become bloated, so you should never have any reason to touch it.

POSTGRESQL PER BACKEND MEMORY
work_mem:
Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The default is 1MB (4MB in recent releases). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value.

temp_buffers:
Sets the maximum number of temporary buffers used by each database session. The default is 8MB. The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session.

maintenance_work_mem:
Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. The default is 16MB (64MB in recent releases). Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem.
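Because these are per-backend settings, a heavy reporting or maintenance session can raise them locally instead of instance-wide. A minimal sketch (values illustrative):

```sql
-- session-level overrides; instance defaults stay untouched
SET work_mem = '64MB';
SET maintenance_work_mem = '256MB';  -- e.g. before a manual VACUUM or CREATE INDEX
```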


3) Utility processes:-
At every PostgreSQL instance startup, a set of utility processes (mandatory and optional) is started along with the shared memory. There are two mandatory processes (BGWRITER and WAL WRITER; these cannot be enabled or disabled) and several optional processes (autovacuum launcher, stats collector, syslogger, archiver, WAL sender and WAL receiver). You can see them with the command 'ps -ef | grep postgres'.
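On a running instance, the output of 'ps -ef | grep postgres' typically looks something like the sketch below (paths, PIDs and the exact set of processes will differ by version and configuration):

```
postgres  1201     1  ... /usr/bin/postgres -D /var/lib/pgsql/data
postgres  1203  1201  ... postgres: writer process
postgres  1204  1201  ... postgres: wal writer process
postgres  1205  1201  ... postgres: autovacuum launcher process
postgres  1206  1201  ... postgres: stats collector process
```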

BGWriter/Writer Process:
The BGWRITER (writer) process is a mandatory process. It is responsible for writing dirty buffers to the data files. BGWRITER spends much of its time sleeping, but every time it wakes, it searches the shared buffer pool for modified pages. After each search, it chooses some number of modified pages, writes them to disk, and evicts those pages from the shared buffer pool. The BGWRITER process can be controlled with three parameters: bgwriter_delay, bgwriter_lru_maxpages and bgwriter_lru_multiplier.

WAL Writer Process:
The WAL writer process is a mandatory process. It writes the dirty buffers in the WAL buffers to the WAL files. The WAL buffers hold the changes made to the database in the transaction log, and the WAL writer process is responsible for writing them to disk. The wal_writer_delay parameter controls how often the WAL writer wakes up.

Stats Collector Process:
The stats collector process is optional; it is on by default. It collects statistics about objects in the database, which the optimizer requires to improve performance. It counts the number of accesses to tables and indexes, in both disk-block and individual-row terms. It also tracks the total number of rows in each table, and information about VACUUM and ANALYZE actions for each table. Collecting statistics adds some overhead to query execution, so the system can be configured to collect or not collect each kind of information; parameters in the postgresql.conf file control the collection activity of the stats collector process.

some important parameters of the stats collector (normally set in postgresql.conf):
track_counts controls whether statistics are collected about table and index accesses.
track_functions enables tracking of usage of user-defined functions.
track_activities enables monitoring of the current command being executed by any server process.

Normally these parameters are set in postgresql.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the SET command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with SET.)
The statistics collector communicates with the backends needing information (including autovacuum) through temporary files. These files are stored in the pg_stat_tmp subdirectory. When the postmaster shuts down, a permanent copy of the statistics data is stored in the global subdirectory. For increased performance, the parameter stats_temp_directory can be pointed at a RAM-based file system, decreasing physical I/O requirements.
pg_stat_database, pg_stat_activity, pg_stat_all_tables, etc. are views over the information collected by the stats collector.
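A quick way to look at what the collector gathers, assuming a live server (column names vary somewhat between PostgreSQL versions):

```sql
-- per-database activity counters
SELECT datname, xact_commit, xact_rollback, blks_read, blks_hit
FROM pg_stat_database;

-- currently executing commands (requires track_activities = on)
SELECT * FROM pg_stat_activity;
```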

Autovacuum Launcher Process:
Autovacuum is an optional process; it is on by default. It automates the execution of the VACUUM and ANALYZE commands. The autovacuum launcher is a daemon process in charge of starting autovacuum worker processes for all databases. The launcher distributes the work across time, attempting to start one worker in each database once per interval set by the parameter autovacuum_naptime; the number of concurrent workers is capped by the parameter autovacuum_max_workers. Each worker process checks each table within its database and executes VACUUM or ANALYZE as needed.
some important parameters of Autovacuum:
autovacuum (boolean) --> controls whether the server should run the autovacuum launcher daemon. On by default.
autovacuum_max_workers (integer) --> specifies the maximum number of autovacuum worker processes.
autovacuum_naptime (integer) --> specifies the minimum delay between autovacuum runs on any given database. The default is one minute (1min).
autovacuum_vacuum_threshold (integer) --> specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default is 50 tuples.
autovacuum_analyze_threshold (integer) --> specifies the minimum number of inserted, updated or deleted tuples needed to trigger an ANALYZE in any one table. The default is 50 tuples.
log_autovacuum_min_duration (integer) --> causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds; -1 (the default) disables this logging.
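A sketch of these settings as they might appear in postgresql.conf (the values shown are the documented defaults, purely for illustration):

```
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
log_autovacuum_min_duration = -1   # -1 disables autovacuum action logging
```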


Syslogger Process / Logger Process:
Logging is an optional process; it is off by default. All the utility processes, the user backends and the postmaster daemon attach to the syslogger process, which logs information about their activities. Log files are written under $PGDATA/pg_log with a .log extension. Note: initdb does not create a pg_log directory under the data directory; explicit creation is needed.

some important parameters of the syslogger:
log_destination (string) --> methods for logging server messages, including stderr, csvlog and syslog.
logging_collector (boolean) --> enables the logging collector, which captures messages sent to stderr and redirects them into log files.
log_directory (string) --> when logging_collector is enabled, determines the directory in which log files are created.
log_filename (string) --> when logging_collector is enabled, sets the file names of the created log files.
log_rotation_size (integer) --> when logging_collector is enabled, determines the maximum size of an individual log file.
log_rotation_age (integer) --> when logging_collector is enabled, determines the maximum lifetime of an individual log file.
log_truncate_on_rotation (boolean) --> when logging_collector is enabled, causes PostgreSQL to truncate (overwrite), rather than append to, any existing log file of the same name.
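Put together, a typical logging setup in postgresql.conf might look like this sketch (values illustrative; %a keeps one file per weekday):

```
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'   # one file per weekday
log_rotation_age = 1d
log_rotation_size = 0
log_truncate_on_rotation = on
```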

Archiver Process:
The archiver process is optional; it is off by default. Setting up the database in archive mode means capturing the WAL data of each segment file once it is filled, and saving that data somewhere before the segment file is recycled for reuse.


How the archiver works:-
1. With the database in archive mode, once a WAL segment is filled, a file named after that segment is created under $PGDATA/pg_xlog/archive_status with the suffix ".ready", i.e. "segment-filename.ready".
2. The archiver process triggers on finding files in the ".ready" state. It picks up the segment file number from the .ready file and copies the corresponding segment from $PGDATA/pg_xlog to the archive destination given in the archive_command parameter (postgresql.conf).
3. On successful completion of the copy, the archiver renames "segment-filename.ready" to "segment-filename.done". This completes the archiving process.
It follows that any files named "segment-filename.ready" found in $PGDATA/pg_xlog/archive_status are pending files still to be copied to the archive destination.
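The pending/completed state can be inspected directly on disk; an illustrative listing (segment names will differ on a real system):

```
$ ls $PGDATA/pg_xlog/archive_status/
000000010000000000000041.done    # already archived
000000010000000000000042.ready   # waiting for the archiver
```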

some important parameters of the archiver:
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'   # Unix
archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"'   # Windows
Compressed archive logs, with pg_decompresslog used during recovery:
archive_command = 'pg_compresslog %p - | gzip > /var/lib/pgsql/archive/%f'
restore_command = 'gunzip < /mnt/server/archivedir/%f | pg_decompresslog - %p'


4) Overview of PostgreSQL
Database file layout

  1.  PGDATA: the base directory for the database server; traditionally it contains the configuration files and the data files
  2.  example location: /var/lib/pgsql/data
  3.  multiple clusters, managed by different server instances, can exist on the same machine
  4.  the configuration files and the pid file can be located anywhere; they may also reside under PGDATA

base subdirectory 

  1.  contains the user database files
  2.  subdirectory names are the database OIDs
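The OID-to-database mapping can be confirmed from the catalog on a live server:

```sql
-- each database's oid matches a subdirectory name under $PGDATA/base
SELECT oid, datname FROM pg_database;
```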

Data Pages 

  1.  pages are located under the database subdirectories
  2.  page default size: 8k
  3.  additional sizes: 4k and 16k, but these require recompiling PostgreSQL
  4.  for general purposes, 8k is best practice

user accessible files

  1.  PG_VERSION: major version number of the installation
  2.  postgresql.conf: main configuration file for PostgreSQL installation
  3.  pg_hba.conf: configures the client authentication method
  4.  pg_ident.conf: configures OS and PostgreSQL authentication name mapping
  5.  postmaster.opts: default command line options for the postmaster
  6.  postmaster.pid: PID of the postmaster and identification of the main data directory

Blogger Won't Save My Template After Changes

Blogger errors:
1. More than one widget was found with id
2. Adding a Google AdSense gadget gives "Please correct the errors on this form"
3. Blogger won't save my template changes

Solution

The error message is the same in every case; only the widget ID differs (HTML2 or Feed1 in these examples, but it may equally be HTML3, HTML4, Feed2, Feed3 and so on). The message can be confusing because it stops you from editing your template code at all. What causes it? The error appears because your template code contains two or more widgets with the same ID. For example, suppose your template contains widget code like this:

<b:widget id='HTML2' locked='false' title='' type='HTML'/>
<b:widget id='HTML2' locked='false' title='Free Download' type='HTML'/>

In the example above there are two widget definitions with the same ID, HTML2. With a duplicate like this you cannot save your template code; it will always produce the error. The fix is to change one of the duplicate IDs to a number not used by any other widget: for example, change the second HTML2 to HTML14:

<b:widget id='HTML2' locked='false' title='' type='HTML'/>
<b:widget id='HTML14' locked='false' title='Free Download' type='HTML'/>

Once you have changed the widget ID, click the Save Template button; the error message should not appear again.



After that, delete the unwanted widget if it is not needed. If the problem is still not solved, go to Template --> Customise --> Advanced --> Clear advanced changes to page text.




Result Cache In Sga - Oracle Architecture

When a query is executed for the very first time, the user's server process searches for the data in the database buffer cache. If the data is there, it uses it; otherwise, it performs an I/O operation to retrieve the data from the datafile on disk into the buffer cache, and from that data the final result set is built.
If another query requires the same data set and it is present in the buffer cache, the process uses the data from the buffer cache to build the result set required by the user.
The Result Cache is an area in the shared pool that contains the end results of query execution.
The Result Cache can be managed either on the client side or the server side.
Client side
Result Cache implementation on the client side requires the application to use Oracle Call Interface (OCI) calls.

Server side
The query can be executed with a /*+ RESULT_CACHE */ hint, or the result_cache_mode parameter can be set to FORCE. With result_cache_mode set to FORCE, all eligible query results are moved to the Result Cache section of the shared pool.
A result cannot be served from the cache when the query is executed for the very first time, or after the cache has been flushed.
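A minimal sketch of the server-side hint; the emp table and its columns are assumed here for illustration:

```sql
-- first execution populates the result cache; repeats are served from it
SELECT /*+ RESULT_CACHE */ deptno, AVG(sal)
FROM emp
GROUP BY deptno;

-- cache behaviour can be inspected via the statistics view
SELECT name, value FROM v$result_cache_statistics;
```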

Pga Monitoring Views In Oracle

The PGA (program or process global area) is a memory area (RAM) that stores data and control information for a single process.
It typically contains a sort area, a hash area and the session cursor cache.
PGA areas can be sized manually by setting parameters such as hash_area_size and sort_area_size.
Hash_Area_Size
The hash_area_size parameter defaults to twice sort_area_size and is used for performing hash joins of Oracle tables. The higher the value of hash_area_size, the higher the propensity for the CBO to use a hash join.
The current value of hash_area_size can be checked quickly with the show parameter command:
SQL > show parameter hash_area_size;
The hash_area_size is obsolete if you are using pga_aggregate_target, but in Oracle 9i with pga_aggregate_target a single work area cannot exceed 5% of the PGA. You can increase the hash_area_size with 'alter session set hash_area_size' or encourage a hash join with a USE_HASH hint.
Sort_Area_Size
The sort_area_size parameter controls the RAM available to dedicated tasks for sorting SQL result sets, reducing expensive disk sorts.
The usage of sort_area_size changed when pga_aggregate_target was introduced: sort_area_size is ignored when pga_aggregate_target is set and workarea_size_policy = auto, unless you are using an older feature such as MTS (shared servers).
If dedicated server connections are used, the sort_area_size parameter is ignored unless you set workarea_size_policy = manual. You can still use sort_area_size at the session level, but because of the default governor that limits the amount of sort area any session may acquire, sorting very large result sets in RAM requires:
SQL > alter session set workarea_size_policy=manual;
SQL > alter session set sort_area_size=nnnn;
Bitmap_Merge_Area_Size
BITMAP_MERGE_AREA_SIZE is relevant only for systems containing bitmap indexes.
It specifies the amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index.
The default value is 1 MB.
A larger value usually improves performance, because the bitmap segments must be sorted before being
merged into a single bitmap.
SQL > show parameter bitmap_merge_area_size
Create_Bitmap_Area_Size
CREATE_BITMAP_AREA_SIZE is relevant only for systems containing bitmap indexes.
It specifies the amount of memory (in bytes) allocated for bitmap creation.
The default value is 8 MB.
A larger value may speed up index creation.
The right value depends on the cardinality: the number of unique values in a column in relation to the number of rows in the table.
If the cardinality is very small, you can set a small value.
SQL > show parameter create_bitmap_area_size
Pga_Aggregate_Target
  • PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance.
  • The default value for PGA_AGGREGATE_TARGET is non-zero: Oracle sets it to 20% of the SGA or 10 MB, whichever is greater.
  • Setting PGA_AGGREGATE_TARGET to a nonzero value automatically sets the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL work areas used by memory-intensive SQL operators such as sort, group-by, hash-join, bitmap merge and bitmap create are sized automatically, and we do not have to bother with settings such as sort_area_size and hash_area_size.
  • If you set PGA_AGGREGATE_TARGET to 0, Oracle automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL, which means that SQL work areas are sized using the *_AREA_SIZE parameters.
  • The memory allocated under PGA_AGGREGATE_TARGET has no relation to SGA_TARGET; the only similarity is that both are taken from the total memory of the system.
  • The minimum value of this parameter is 10 MB and the maximum is 4096 GB - 1.
SQL > show parameter pga_aggregate_target
Workarea_Size_Policy
WORKAREA_SIZE_POLICY specifies the policy for sizing work areas. This parameter controls the
mode in which working areas are tuned.
AUTO
Work areas used by memory-intensive operators are sized automatically, based on the PGA memory used by the system, the target PGA memory set in PGA_AGGREGATE_TARGET, and the requirements of each individual operator.
MANUAL
The sizing of work areas is manual, based on the value of the *_AREA_SIZE parameter corresponding to the operation (for example, a sort uses SORT_AREA_SIZE). Specifying MANUAL may result in sub-optimal performance and poor PGA memory utilization.
SQL > show parameter workarea_size_policy
PGA usage statistics:
SQL > select * from v$pgastat;
Determine a good setting for pga_aggregate_target:
SQL > select * from v$pga_target_advice order by pga_target_for_estimate;
Show the maximum PGA usage per process:
SQL > select max(pga_used_mem), max(pga_alloc_mem), max(pga_max_mem) from v$process;

Storing And Reading LOBs In Oracle

LOBs are a powerful way of storing, accessing, and maintaining large content in an Oracle database. A LOB can be up to 128 terabytes or more in size, depending on your block size, and can store both binary and character content such as text, graphic images, still video clips, full-motion video, and sound.
          
Types of LOBs
Internal LOBs
Those stored in the database, either in-line in the table or in a separate segment or tablespace:
CLOB - Character Large Object
BLOB - Binary Large Object
NCLOB - National Character Large Object
External LOBs
Those stored as operating system files, outside of the database:
BFILE - Binary File
            
CLOB
Stores string data in the database character set format. Used for large strings or documents that use the database character set exclusively.
Characters in the database character set are in a fixed width format.
Examples
SQL> CREATE TABLE lob1 (col1 number, col2 clob);
Table created.
SQL> insert into lob1 values (1,234);
1 row created.
SQL> insert into lob1 values (2,'hi frnds how r u');
1 row created.
SQL> select * from lob1;
      COL1 COL2
---------- ----------------------------------------
         1 234
         2 hi frnds how r u
BLOB
Stores any kind of data in binary format.
Typically used for multimedia data such as images, audio, and video.
Examples
SQL> CREATE TABLE lob2 (col1 number, col2 blob);
Table created.
SQL> insert into lob2 values(12, utl_raw.cast_to_raw('/home/oracle/1.png'));
1 row created.
(Note: utl_raw.cast_to_raw stores the path string itself as raw bytes, not the image file's contents; loading the file itself would require DBMS_LOB.LOADFROMFILE or similar.)
SQL>  select count(*) from lob2;
 COUNT(*)
———-
     1
NCLOB
It’s similar to a CLOB, but characters are stored in a NLS or multibyte national character set.
Example
SQL> create table nclob_1 (col1 number, col2 nclob);
Table created.
SQL> insert into nclob_1 values(1,'any nchar literal');
1 row created.
BFILE
A binary file stored outside of the database in the host operating system file system, but accessible from database tables.
Bfiles can be accessed from your application on a read-only basis.
Use Bfiles to store static data, such as image data, that does not need to be manipulated in applications.
Examples
create directory dir_1 as '/home/oracle/mydir/';
Directory created.
SQL> grant read,write on directory dir_1 to public;
Grant succeeded.
SQL> create table bfile_1 (col1 number,col2 bfile);
Table created.
SQL> insert into bfile_1 values (1, bfilename('dir_1','1.png'));
1 row created.

External Table In Oracle

External tables are created using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. They provide a way to load bulk data, much as SQL*Loader does.
Step 1: create a data file at the OS level
[oracle@db ~]$ mkdir d1
[oracle@db ~]$ cd d1
[oracle@db d1]$ vi f1
[oracle@db ~]$ export ORACLE_SID=
[oracle@db ~]$ sqlplus / as sysdba
First we connect as the SYS user; then we can create a new directory object.
Syntax:
SQL> conn / as sysdba
You must have the CREATE ANY DIRECTORY system privilege to create directories.
Syntax:
SQL> grant create any directory to username;
Example:
SQL> grant create any directory to sai;
Syntax:
SQL> create directory directory_name as 'path';
Example:
SQL> create directory d1 as '/home/oracle/db/';
Syntax:
SQL> grant read,write on directory directory_name to username;
Example:
SQL> grant read,write on directory d1 to nijam;
Create table clauses:
ORGANIZATION EXTERNAL: declares the table as external, using TYPE oracle_loader.
DEFAULT DIRECTORY: the directory object in which the data files reside.
ACCESS PARAMETERS: describe the data format, e.g. RECORDS DELIMITED BY NEWLINE and FIELDS TERMINATED BY a delimiter.
LOCATION: specifies the location of the external data as a list of directory objects and file names; if a directory object is not specified, the default directory object is used as the file location.
Example program (using the directory d1 and file f1 created above):
SQL> create table study (id number(5), name varchar2(15))
     organization external (
       type oracle_loader
       default directory d1
       access parameters (
         records delimited by newline
         fields terminated by ',')
       location ('f1'));
SQL> select * from study;

Sub Queries In Oracle Database

A subquery (also called an inner query or nested query) is a query placed inside another SQL statement, usually in the WHERE clause.
The subquery can be nested inside a SELECT, INSERT, UPDATE or DELETE statement, or inside another subquery.
You can use comparison operators such as >, < or =; the comparison operator can also be a multiple-row operator such as IN, ANY or ALL.
The inner query executes before its parent query, so that the result of the inner query can be passed to the outer query.
Types Of Subqueries :-
Single row subquery
Multiple row subquery
Correlated subquery
SINGLE ROW SUBQUERY :-
Queries that return only one row from the inner select statement.
Select statement :
SQL > select * from tablename where columnname=(select columnname from tablename where columnname='value');
Example :-
SQL > select * from t1 where id = (select id from t1 where salary =4500);
Insert statement :
Syntax :-
SQL > insert into tablename1 select * from tablename2 where columnname = (select columnname from tablename2 where columnname='value');
Example :-
SQL > insert into t1 select * from emp where empno = (select empno from emp where sal=800);
Update statement :
Syntax :-
SQL > update tablename1 set columnname=value where columnname = (select columnname from tablename2 where columnname=value);
Example :-
SQL > update emp set sal=2000 where empno=(select empno from t1 where sal=800);
Delete statement :
Syntax :-
SQL > delete from tablename where columnname =(select columnname from tablename where columnname='value');
Example :-
SQL > delete from emp where empno=(select empno from t1 where ename='SMITH');
MULTI ROW SUBQUERY
Queries that return more than one row from the Inner SELECT statement.
Types of operators
Operator    Meaning
IN          Equal to any member in the list
ANY         Compare value to each value returned by the subquery
ALL         Compare value to every value returned by the subquery
IN
SQL > select * from tablename where columnname in (select columnname from tablename where columnname='values');
Example :-
SQL > select * from emp where sal in (select sal from emp where deptno=10);
ANY
You can use the ANY operator to compare a value with any value in a list. You must place a comparison operator such as =, !=, >, <, >= or <= before ANY in your query.
Syntax :-
SQL > select * from tablename where columnname =any (select columnname from tablename where columnname='values');
Example :-
SQL > select * from emp where sal =any(select sal from emp where deptno=20);
ALL
Syntax :-
SQL > select * from tablename where columnname < all (select columnname from tablename where columnname='values');
Example :-
SQL > select * from emp where sal < all(select sal from emp where deptno=20);
Correlated subqueries
Correlated subqueries are used for row-by-row processing. Each subquery is executed once for every row of the outer query.
Syntax :-
SQL > select columnname,columnname from table1 where columnname operator (select columnname from table2 where columnname='values');
Example :-
SQL > select a.name,a.salary from emp a where a.deptno =(select  deptno from dept b where b.deptno=10);