Tuesday, August 16, 2016

How to Extract LONG Data to a CLOB?

Up until Oracle 11g, the Oracle data dictionary still includes some LONG columns. I tried TO_LOB/DBMS_LOB.SUBSTR, but they always raise the Oracle error "ORA-00932: inconsistent datatypes: expected - got LONG" (TO_LOB, for example, is only allowed in the select list of an INSERT ... SELECT or CREATE TABLE ... AS SELECT statement). So I searched some documents and wrote the function below to extract LONG data into a CLOB.

select to_char(LONG2CLOB('COLUMN_EXPRESSION', 'ALL_IND_EXPRESSIONS', 'table_name = upper(''***'')')) as column_expression
from dual;


CREATE OR REPLACE FUNCTION LONG2CLOB (
   colName   IN VARCHAR2,
   tabName   IN VARCHAR2,
   whereCond IN VARCHAR2 := NULL)
RETURN CLOB
IS
   v_cur            PLS_INTEGER := DBMS_SQL.OPEN_CURSOR;
   v_rows           PLS_INTEGER;
   v_buflen         PLS_INTEGER := 32760;
   v_curpos         NUMBER := 0;
   v_long_len       NUMBER;
   v_long_val       VARCHAR2(32767);  -- DBMS_SQL.COLUMN_VALUE_LONG returns each piece as VARCHAR2
   v_return_val     CLOB;
BEGIN
   /* Parse the dynamic SELECT built from the column, table and WHERE condition. */
   DBMS_SQL.PARSE (
      v_cur,
      'SELECT ' || colName ||
      '  FROM ' || tabName ||
      ' WHERE ' || NVL (whereCond, '1 = 1'),
      DBMS_SQL.NATIVE);

   /* Define the long column and then execute and fetch... */
   DBMS_SQL.DEFINE_COLUMN_LONG (v_cur, 1);
   v_rows := DBMS_SQL.EXECUTE (v_cur);
   v_rows := DBMS_SQL.FETCH_ROWS (v_cur);

   /* If a row was fetched, loop through the long value until
   || all pieces are retrieved.
   */
   IF v_rows > 0
   THEN
      LOOP
         DBMS_SQL.column_value_long (v_cur,
                                     1,
                                     v_buflen,
                                     v_curpos,
                                     v_long_val,
                                     v_long_len);
       
       
         EXIT WHEN v_long_len = 0;
         v_return_val := v_return_val || v_long_val;
         v_curpos := v_curpos + v_long_len;
      END LOOP;
   END IF;
   DBMS_SQL.CLOSE_CURSOR (v_cur);
   RETURN v_return_val;
EXCEPTION
   WHEN OTHERS
   THEN
      IF DBMS_SQL.is_open (v_cur)
      THEN
         DBMS_SQL.close_cursor (v_cur);
      END IF;
      RAISE;
END LONG2CLOB;
/

Monday, August 15, 2016

Redis Vs. Memcached



1. Comparison

Description
  Memcached: In-memory key-value store, originally intended for caching
  Redis: In-memory data structure store, used as a database, cache and message broker

Ranking
  Memcached: Rank 22, score 32.42
  Redis: Rank 10, score 100.65

Database model
  Memcached: Key-value store
  Redis: Key-value store

Developer
  Memcached: Danga Interactive
  Redis: Salvatore Sanfilippo

Initial release
  Memcached: 2003
  Redis: 2009

Current release
  Memcached: 1.4.24, April 2015
  Redis: 3.0.3, June 2015

License
  Memcached: Open source
  Redis: Open source

Database as a Service (DBaaS)
  Memcached: No
  Redis: No

Implementation language
  Memcached: C
  Redis: C

Server operating systems
  Memcached: FreeBSD, Linux, OS X, Unix, Windows
  Redis: BSD, Linux, OS X, Windows

Data scheme
  Memcached: Schema-free
  Redis: Schema-free

Data types
  Memcached: Works only with plain strings; key names are limited to 250 bytes and values to 1 MB
  Redis: Key names and values can each be as large as 512 MB and are binary safe; has six data types that enable more intelligent caching and manipulation of cached data

XML support
  Memcached: No
  Redis: No

Secondary indexes
  Memcached: No
  Redis: No

SQL
  Memcached: No
  Redis: No

APIs and other access methods
  Memcached: Proprietary protocol
  Redis: Proprietary protocol

Supported programming languages
  Memcached: .Net, C, C++, ColdFusion, Erlang, Java, Lisp, Lua, OCaml, Perl, PHP, Python, Ruby
  Redis: C, C#, C++, Clojure, Crystal, D, Dart, Elixir, Erlang, Fancy, Go, Haskell, Haxe, Java, JavaScript (Node.js), Lisp, Lua, MatLab, Objective-C, OCaml, Perl, PHP, Prolog, Pure Data, Python, R, Rebol, Ruby, Rust, Scala, Scheme, Smalltalk, Tcl

Server-side scripts and commands
  Memcached: No
  Redis: Lua scripting and 160-plus commands

Triggers
  Memcached: No
  Redis: No

Partitioning methods
  Memcached: Client/proxy sharding
  Redis: Client/proxy sharding and Redis Cluster

Replication methods
  Memcached: No
  Redis: Master-slave replication

MapReduce
  Memcached: No
  Redis: No

Consistency concepts
  Memcached: No
  Redis: Eventual consistency

Foreign keys
  Memcached: No
  Redis: No

Transaction concepts
  Memcached: No
  Redis: Optimistic locking, atomic execution of command blocks and scripts

Concurrency
  Memcached: Yes
  Redis: Yes

Persistence
  Memcached: No
  Redis: Yes (can be disabled)

Eviction policies
  Memcached: LRU
  Redis: noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl (see http://redis.io/topics/lru-cache)

Timeout keys
  Memcached: Explicit expiration times; delete cached objects when we know we're doing something that will invalidate them but are not in a position to update them; include timestamps on cached objects and do our own expiries based on dependencies (https://www.mediawiki.org/wiki/Memcached)
  Redis: EXPIRE, EXPIREAT, TTL, PERSIST, SETEX

In-memory capabilities
  Memcached: Yes
  Redis: Yes

Performance
  Memcached: Good
  Redis: Very good

Security
  Memcached: http://dustin.sallings.org/2010/08/08/memcached-security.html
  Redis: http://redis.io/topics/security

Sunday, August 7, 2016

Upgrade Commons DBCP JDBC Pool to Tomcat JDBC Pool

Tomcat now provides a better JDBC pool, so many applications want to upgrade their own programs to use the Tomcat JDBC Pool. Today I'll introduce some differences between the Tomcat JDBC Pool and the Commons DBCP pool.

First, I'd like to introduce some key new features of the Tomcat JDBC Pool.

  • validationInterval - in addition to running validation on connections, avoid running it too frequently; this is very useful for reducing the validation load on the database.
  • maxAge - (long) time in milliseconds to keep a connection. When a connection is returned to the pool, the pool checks whether now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool.
  • jdbcInterceptors - flexible, pluggable interceptors to create customizations around the pool, query execution and result-set handling.
  • fairQueue - set the fair flag to true to achieve thread fairness or to use asynchronous connection retrieval.
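The features above map to attributes on the pool's <Resource> element. A minimal sketch follows; the attribute names are real Tomcat JDBC Pool settings, but the resource name and the values are illustrative assumptions, not tuned recommendations:

```xml
<Resource name="jdbc/example_DB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          validationQuery="SELECT 1 FROM dual"
          validationInterval="30000"
          maxAge="1800000"
          fairQueue="true"
          jdbcInterceptors="ConnectionState;StatementFinalizer"
/>
```

With validationInterval="30000" a connection is validated at most once every 30 seconds; maxAge="1800000" retires connections older than 30 minutes; ConnectionState and StatementFinalizer are two of the interceptors shipped with the pool.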
Second, I'd like to introduce some key parameters whose names differ between the two pools.

Finally, here is a standard configuration for the Tomcat JDBC Pool.

<Resource name="jdbc/my_testing_DB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          testOnBorrow="true"
          testOnReturn="false"
          validationQuery="SELECT 1 from dual"
          validationInterval="1000"
          timeBetweenEvictionRunsMillis="300000"
          maxActive="50"
          minIdle="10"
          maxWait="10000"
          initialSize="10"
          username="bossrpt"
          password="pass"
          driverClassName="com.inet.ora.OraDriver"
          url="jdbc:inetora:***?service=***;failover=true;host1=***;queryTimeout=600"
/>

Reference:
https://commons.apache.org/proper/commons-dbcp/configuration.html
https://tomcat.apache.org/tomcat-8.0-doc/jndi-datasource-examples-howto.html

What happened on "java.net.SocketException: Socket closed"?

Our application got errors when connecting to the database through the Tomcat JDBC Pool. The error reads "com.inet.ora.Ora4SQLException: [OraDriver] #69 java.net.SocketException: Socket closed". We didn't kill the connection on the database side, so why was the connection closed?

First, I checked whether the firewall and Linux socket setup had problems; plain Oracle SQL*Plus works fine over them, so the Linux and firewall setup is correct.

Second, I checked whether the Java JDBC setup had problems; I wrote a small Java program with the same JDBC setup to see whether I could reproduce the error, and I got the same error.

Third, I checked the JDBC configuration and found the root cause: we set the query timeout to 600 seconds (10 minutes) but also set removeAbandoned="true" and removeAbandonedTimeout="60". This combination is wrong. Why?

Let me explain these parameters in detail.
removeAbandoned (boolean): flag to remove abandoned connections if they exceed the removeAbandonedTimeout. If set to true, a connection is considered abandoned and eligible for removal if it has been in use longer than removeAbandonedTimeout. Setting this to true can recover db connections from applications that fail to close a connection. See also logAbandoned. The default value is false.

removeAbandonedTimeout (int): timeout in seconds before an abandoned (in use) connection can be removed. The default value is 60 (60 seconds). The value should be set to the longest running query your applications might have.

logAbandoned (boolean): flag to log stack traces for application code which abandoned a Connection. Logging of abandoned Connections adds overhead for every Connection borrow because a stack trace has to be generated. The default value is false.

Because the Tomcat JDBC Pool removes abandoned connections once they exceed removeAbandonedTimeout, and we set the query timeout to 10 minutes but removeAbandonedTimeout to 60 seconds, the pool removed the connection after 60 seconds while it was still running its query. Our report database holds a lot of data, so getting a result regularly takes more than 60 seconds; removing the connection closes its socket under the running statement, which produces the "Socket closed" error.
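The fix, then, is to make removeAbandonedTimeout at least as long as the longest-running query. A corrected fragment of the <Resource> configuration for our 600-second query timeout (660 is my illustrative choice, not a mandated value):

```xml
removeAbandoned="true"
removeAbandonedTimeout="660"
logAbandoned="true"
```

removeAbandonedTimeout is now longer than the 600-second queryTimeout, so the pool no longer closes a connection while its query is still running, and logAbandoned="true" helps find code paths that genuinely leak connections.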


Java test code for the Tomcat JDBC Pool:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class TestJDBC {
    private static String driver = "com.inet.ora.OraDriver";
    private static String url = "jdbc:inetora:***?service=***&failover=true&host1=**&port1=1521&loginTimeout=10&queryTimeout=600";
    private static String userName = "***";
    private static String password = "pass";

    private static DataSource ds = null;

    public static void main(String[] args) throws Exception {
        initDataSource();

        String sql = "INSERT INTO ......";

        Connection con = getConnection();
        PreparedStatement pstmt = con.prepareStatement(sql);

        long beginM = System.currentTimeMillis();
        System.out.println("Begin:" + beginM);

        try {
            pstmt.execute();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            closeCon(null, pstmt, con);
        }

        long endM = System.currentTimeMillis();
        System.out.println("End:" + endM);
        System.out.println("Total:" + (endM - beginM));
    }

    public static void initDataSource() throws Exception {
        if (ds == null) {
            System.out.println("DataSource settings: [driver:" + driver + ",url:" + url
                    + ",userName:" + userName + ",password:" + password + "]");

            PoolProperties p = new PoolProperties();
            p.setDriverClassName(driver);
            p.setUrl(url);
            p.setUsername(userName);
            p.setPassword(password);
            p.setMaxActive(50);
            p.setInitialSize(10);
            p.setMaxIdle(10);
            p.setMaxWait(10000);
            p.setConnectionProperties("streamstolob=true");
            p.setTestOnBorrow(true);
            p.setTimeBetweenEvictionRunsMillis(300000);
            p.setRemoveAbandonedTimeout(60);
            p.setRemoveAbandoned(true);
            p.setValidationQuery("SELECT instance_name FROM v$instance");
            p.setLogAbandoned(true);
            p.setValidationInterval(1000);

            ds = new DataSource();
            ds.setPoolProperties(p);
        }
    }

    public static void shutDownDataSource() throws Exception {
        if (ds != null) {
            ds.close();
        }
    }

    public static Connection getConnection() {
        Connection con = null;
        try {
            con = ds.getConnection();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return con;
    }

    public static void closeCon(ResultSet rs, PreparedStatement ps, Connection con) {
        if (rs != null) {
            try {
                rs.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        if (ps != null) {
            try {
                ps.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        if (con != null) {
            try {
                con.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

Monday, August 1, 2016

How to Use Redis (master-slave) as a Web Cache?

Nowadays, most applications use Redis as a cache service to improve user experience. Now I will introduce how to use Redis (master-slave-sentinel) as a web cache.

Firstly,
I'd like to introduce how the application uses Redis (master-slave-sentinel): the app talks to Redis Sentinel, which makes the master/slave roles transparent to the application, so when the Redis master fails over to a slave, the application still works well. Below is the detailed sequence diagram.

Secondly,
I'd like to introduce how to produce a good Redis design that supports business growth. I introduced the concept of a "Redis pool", which consists of master-slave-sentinel sets; a pool may be 1:1:3 or 1:2:3. One pool can support all the business of one data center, or several pools can serve one data center's business. Another point, to keep the cache simple, is not to replicate Redis pools between data centers. Below is the detailed logical deployment architecture.
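One reason each pool carries three sentinels: Redis Sentinel requires a majority of the sentinels to elect a leader before a failover can proceed, so three sentinels tolerate the loss of any one of them. A minimal sketch of the majority rule in plain Java (the class and method names are mine, for illustration; no Redis client is involved):

```java
public class SentinelQuorum {

    /** Majority of n sentinels needed to elect a failover leader: floor(n/2) + 1. */
    static int majority(int sentinels) {
        return sentinels / 2 + 1;
    }

    public static void main(String[] args) {
        // With the 1:1:3 or 1:2:3 pool layouts above, 3 sentinels need 2 agreeing
        // votes, so the pool can still fail over after losing a single sentinel.
        System.out.println("3 sentinels need " + majority(3) + " votes");
        System.out.println("5 sentinels need " + majority(5) + " votes");
    }
}
```

This is also why a pool with only two sentinels is fragile: losing one leaves a single survivor, which can never reach the required majority of two.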



Thirdly,
I'd like to introduce how to monitor Redis health. We use a Python plug-in for collectd to monitor Redis; below is the detailed workflow.





Huge Page Vs. Transparent Huge Page


Starting with the RedHat 6, RedHat 7, OL6, OL7, SLES11 and UEK2 kernels, Transparent Huge Pages are implemented and enabled by default in an attempt to improve memory management.
Transparent Huge Pages are similar to the Huge Pages that have been available in previous Linux releases. The main difference is that Transparent Huge Pages are set up dynamically at run time by the khugepaged thread in the kernel, while regular Huge Pages have to be pre-allocated at boot time.

If you leave Transparent Huge Pages enabled on a server running Oracle Database, database performance can suddenly degrade; therefore we need to disable Transparent Huge Pages. Below are the detailed steps.

Add the following lines in /etc/rc.local and reboot the server (this still can be done on Redhat 7 although rc.local is being deprecated):


if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Notes:
Many people disable Transparent Huge Pages at run time, but the setting is lost when the server reboots; that is why the commands above go into /etc/rc.local.

Reference:
Oracle Doc 1557478.1

How to Calculate Oracle Page Table Memory Usage on Linux (64-bit)?

As a DBA, you must know the concepts of the page table and huge pages, because a database's physical memory is huge, and if you use common 4K pages to manage that memory, the page tables themselves will exhaust a lot of physical memory. Let me introduce the problem with page tables built from common 4K pages.

First, I'd like to introduce the Linux x86_64 4-level page table architecture.

Secondly, Linux x86_64 uses a 4-level page table design: PGD (9 bits), PUD (9 bits), PMD (9 bits), PTE (9 bits) and page offset (12 bits), covering 48 bits of the 64-bit virtual address (the top 16 bits are unused).

Thirdly, assume one Oracle node pre-allocates 96 GB of physical memory. The number of PTEs (page table entries) needed is 96 GB / 4 KB = 25,165,824. Each PTE is 8 bytes on x86_64, so that is 192 MB (25,165,824 * 8 / 1024 / 1024) of page table entries per process.

Finally, assume one Oracle node supports at most 4000 dedicated sessions, which means about 2675 Oracle processes, and each process that attaches to the SGA needs its own copy of the page table. That works out to about 501 GB (2675 * 192 MB) of physical memory just for page tables, which is why we need huge pages to reduce the memory consumed by page tables.
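The arithmetic can be checked with a few lines of Java, using 8 bytes per PTE (the x86_64 entry size); the class and method names are mine, for illustration only:

```java
public class PageTableMath {

    /** Number of pages needed to map memGiB gibibytes with pageKiB-sized pages. */
    static long pages(long memGiB, long pageKiB) {
        return memGiB * 1024 * 1024 / pageKiB;
    }

    /** Page table size in bytes: one 8-byte PTE per mapped page on x86_64. */
    static long pteBytes(long pages) {
        return pages * 8;
    }

    public static void main(String[] args) {
        long p = pages(96, 4);                           // pages for a 96 GiB SGA with 4 KiB pages
        long perProcessMiB = pteBytes(p) / 1024 / 1024;  // PTE memory per attached process, in MiB
        long totalGiB = perProcessMiB * 2675 / 1024;     // across 2675 dedicated processes, in GiB
        System.out.println(p + " pages, " + perProcessMiB + " MiB/process, ~" + totalGiB + " GiB total");
    }
}
```

With 2 MiB huge pages the same SGA needs only 49,152 entries (96 GiB / 2 MiB), which is why huge pages shrink the page-table overhead so dramatically.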

Reference:
https://lwn.net/Articles/106177/
https://docs.oracle.com/cd/E18283_01/server.112/e17110/initparams229.htm