Show diststyle of tables
SELECT relname, reldiststyle FROM pg_class WHERE relname='salary' OR relname='employee';
Details about distribution styles: http://docs.aws.amazon.com/redshift/latest/dg/viewing-distribution-styles.html
How to COPY multiple files into Redshift from S3
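COPY loads multiple files in parallel when given either a common key prefix or a manifest file. A minimal sketch, assuming a hypothetical bucket and the role ARN used elsewhere in these notes:

```sql
-- Load every object whose key starts with the prefix 'tickit/venue/part-'
copy venue
from 's3://mybucket/tickit/venue/part-'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';

-- Or list the files explicitly in a JSON manifest and pass MANIFEST
copy venue
from 's3://mybucket/tickit/venue.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest delimiter '|';
```

Splitting input into multiple files (ideally a multiple of the slice count) lets the slices load in parallel.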
You can GROUP BY (or ORDER BY) the ordinal position of a column in the select list instead of the column name
SELECT listing.sellerid, sum(sales.qtysold) FROM sales, listing WHERE sales.salesid = listing.listid AND listing.listtime > '2008-12-01' AND sales.saletime > '2008-12-01' GROUP BY 1 ORDER BY 1;
COPY with automatic compression
To apply automatic compression to an empty table, regardless of its current compression encodings, run the COPY command with the COMPUPDATE option set to ON. To disable automatic compression, run the COPY command with the COMPUPDATE option set to OFF.
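A sketch of the ON case, assuming the `salary` table from these notes and a hypothetical bucket and role:

```sql
-- COMPUPDATE ON samples the incoming data and applies
-- compression encodings to the (empty) target table
copy salary
from 's3://mybucket/salary/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
compupdate on;
```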
Change diststyle of table
CREATE TABLE userseven DISTSTYLE EVEN AS SELECT * FROM users;
Show storage space of columns
select col, max(blocknum) from stv_blocklist b, stv_tbl_perm p where (b.tbl=p.id) and name ='lineorder' and col < 17 group by name, col order by col;
Change current environment in SQL Editor
set query_group to test; set session authorization 'adminwlm'; set wlm_query_slot_count to 3; /* override current level */
Primary key and foreign key
Amazon Redshift does not enforce primary key and foreign key constraints, but the query optimizer uses them when it generates query plans. If you set primary keys and foreign keys, your application must maintain the validity of the keys.
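A sketch of declaring such informational constraints, using hypothetical column lists for the `employee` and `salary` tables mentioned in these notes:

```sql
-- Constraints are informational only: Redshift will not reject
-- violating rows, but the planner uses them for optimization
create table employee (
    employee_id int primary key,
    first_name  varchar(14)
);

create table salary (
    employee_id int references employee (employee_id),
    salary      int
);
```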
Distribution info in EXPLAIN
* DS_DIST_NONE: No redistribution is required, because corresponding slices are collocated on the compute nodes. You will typically have only one DS_DIST_NONE step, the join between the fact table and one dimension table.
* DS_DIST_ALL_NONE: No redistribution is required, because the inner join table used DISTSTYLE ALL. The entire table is located on every node.
* DS_DIST_INNER: The inner table is redistributed.
* DS_DIST_OUTER: The outer table is redistributed.
* DS_BCAST_INNER: A copy of the entire inner table is broadcast to all the compute nodes.
* DS_DIST_ALL_INNER: The entire inner table is redistributed to a single slice because the outer table uses DISTSTYLE ALL.
* DS_DIST_BOTH: Both tables are redistributed.
create table likesales (like sales); insert into likesales (select * from sales); drop table sales; alter table likesales rename to sales;
select tbl as tbl_id, stv_tbl_perm.name as table_name, col, interleaved_skew, last_reindex from svv_interleaved_columns, stv_tbl_perm where svv_interleaved_columns.tbl = stv_tbl_perm.id and interleaved_skew is not null;
The value for interleaved_skew is a ratio that indicates the amount of skew. A value of 1 means there is no skew. If the skew is greater than 1.4, a VACUUM REINDEX will usually improve performance unless the skew is inherent in the underlying set.
About interleaved sort key: http://docs.aws.amazon.com/redshift/latest/dg/t_Sorting_data.html#t_Sorting_data-interleaved
Concurrent write operations are supported in Amazon Redshift in a protective way, using write locks on tables and the principle of serializable isolation.
unload ('select * from venue order by venueid') to 's3://mybucket/tickit/venue/reload_' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' manifest delimiter '|'; truncate venue; copy venue from 's3://mybucket/tickit/venue/reload_manifest' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' manifest delimiter '|';
In addition to the Python Standard Library, the following modules are part of the Amazon Redshift implementation:
* numpy 1.8.2
* pandas 0.14.1
* python-dateutil 2.2
* pytz 2015.7
* scipy 0.12.1
* six 1.3.0
* wsgiref 0.1.2
CREATE FUNCTION f_within_range (x1 float, y1 float, x2 float, y2 float)
RETURNS bool IMMUTABLE AS $$
    def distance(x1, y1, x2, y2):
        import math
        return math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
    return distance(x1, y1, x2, y2) < 20
$$ LANGUAGE plpythonu;
* Nested Loop : The least optimal join, a nested loop is used mainly for cross-joins (Cartesian products) and some inequality joins.
* Hash Join and Hash: Typically faster than a nested loop join, a hash join and hash are used for inner joins and left and right outer joins. These operators are used when joining tables where the join columns are not both distribution keys and sort keys. The hash operator creates the hash table for the inner table in the join; the hash join operator reads the outer table, hashes the joining column, and finds matches in the inner hash table.
* Merge Join: Typically the fastest join, a merge join is used for inner joins and outer joins. The merge join is not used for full joins. This operator is used when joining tables where the join columns are both distribution keys and sort keys, and when less than 20 percent of the joining tables are unsorted. It reads two sorted tables in order and finds the matching rows. To view the percent of unsorted rows, query the SVV_TABLE_INFO system table.
You can temporarily override the amount of memory assigned to a query by setting the wlm_query_slot_count parameter to specify the number of slots allocated to the query. By default, WLM queues have a concurrency level of 5.
A VARCHAR(12) column can contain 12 single-byte characters, 6 two-byte characters, 4 three-byte characters, or 3 four-byte characters.
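Because VARCHAR length is measured in bytes rather than characters, OCTET_LENGTH can show how much of a column's capacity a multibyte string actually uses. A small sketch:

```sql
-- 'résumé' has 6 characters but 8 bytes in UTF-8
-- (each 'é' takes 2 bytes), so it needs VARCHAR(8) or wider
select octet_length('résumé');
```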
Tuple in Redshift SQL
select * from venue where (venuecity, venuestate) in (('Miami', 'FL'), ('Tampa', 'FL')) order by venueid;
SELECT gender, COUNT(gender) FROM employee WHERE first_name SIMILAR TO '%ein%' GROUP BY gender;
To reduce processing time and improve overall system performance, Amazon Redshift skips analyzing a table if the percentage of rows that have changed since the last ANALYZE command run is lower than the analyze threshold specified by the analyze_threshold_percent parameter. By default, analyze_threshold_percent is 10
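The threshold can be adjusted per session before running ANALYZE. A sketch using the `salary` table from these notes:

```sql
-- Lower the threshold for this session so ANALYZE runs
-- even when only a small fraction of rows has changed
set analyze_threshold_percent to 2;
analyze salary;
```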
COPY from DynamoDB
Setting READRATIO to 100 or higher will enable Amazon Redshift to consume the entirety of the DynamoDB table's provisioned throughput, which will seriously degrade the performance of concurrent read operations against the same table during the COPY session. Write traffic will be unaffected.
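A sketch of a throttled load, with a hypothetical DynamoDB table name and the role ARN used elsewhere in these notes:

```sql
-- Consume at most 50% of the DynamoDB table's provisioned read throughput
copy favoritemovies
from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 50;
```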
Different databases in Redshift
After you have created the TICKIT database, you can connect to the new database from your SQL client. Use the same connection parameters as you used for your current connection, but change the database name to tickit.
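The step described above, sketched out:

```sql
-- Run from any existing database on the cluster
create database tickit;
-- Then reconnect, changing only the database name
-- in your client's connection parameters to 'tickit'
```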
Interleaved Sort Key
A maximum of eight columns can be specified for an interleaved sort key.
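A sketch of declaring an interleaved sort key, with hypothetical column choices based on the `sales` table used in these notes:

```sql
-- INTERLEAVED SORTKEY accepts up to eight columns;
-- each gets equal weight in the sort order
create table sales_interleaved (
    salesid  int,
    listid   int,
    saletime timestamp
)
interleaved sortkey (listid, saletime);
```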
Cast and truncation in SQL
insert into t1(col1) values('Incomplete'::char(3));
INSERT INTO from SELECT
INSERT INTO hello ( SELECT employee_id, first_name FROM employee ORDER BY 2)
Prepare and execute PLAN
DROP TABLE IF EXISTS prep1; CREATE TABLE prep1 (c1 int, c2 char(20)); PREPARE prep_insert_plan (int, char) AS insert into prep1 values ($1, $2); EXECUTE prep_insert_plan (1, 'one'); EXECUTE prep_insert_plan (2, 'two'); EXECUTE prep_insert_plan (3, 'three'); DEALLOCATE prep_insert_plan;
Powerful 'WITH' for sub-query in SQL
WITH workage AS ( SELECT employee_id, datediff(day, birthday, work_day)/365 AS work_age FROM employee) SELECT COUNT(employee_id), work_age FROM workage GROUP BY work_age ORDER BY 1 DESC;
UNLOAD with compression
unload ('select * from employee') TO 's3://robin-data-023/employee_' iam_role 'arn:aws:iam::589631040421:role/fullRedshift' gzip;
VACUUM FULL salary TO 100 PERCENT;
'OVER' in SQL
SELECT *, 100*salary/FIRST_VALUE(salary) OVER (PARTITION BY employee_id ORDER BY salary DESC ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS most_salary_percent FROM salary WHERE employee_id = 10001 OR employee_id = 10002;
Show and set current settings
select current_setting('query_group'); select set_config('query_group', 'test', true);
Show blocks(1MB) allocated to each column in the 'salary' table
select col, count(*) from stv_blocklist, stv_tbl_perm where stv_blocklist.tbl = stv_tbl_perm.id and stv_blocklist.slice = stv_tbl_perm.slice and stv_tbl_perm.name = 'salary' group by col order by col;
Slice and Col
slice: Node slice
col: Every table you create has three hidden columns appended to it: INSERT_XID, DELETE_XID, ROW_ID
In STV_SLICES, we can see the relation between slices and nodes. A single node has two slices: 0 and 1
Commonly used tables for meta information