SSLCertificateChainFile

I had been using a free SSL certificate from StartCom for several years. Today I happened to open the site in Firefox and got a certificate error.

After searching around, I finally found the answer on Stack Overflow: adding the SSLCertificateChainFile directive brought the site back to normal.

Along the way I also discovered a SiteCheck tool, which gave me important information about certificate expiration.
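For reference, the fix amounts to one extra line in the Apache SSL virtual host. This is a minimal sketch with hypothetical file paths; the actual certificate names depend on what StartCom issued:

```shell
# Minimal sketch of the fix; all paths and file names are hypothetical.
# Without SSLCertificateChainFile, the intermediate CA is never sent,
# so a browser that has not cached it (like a fresh Firefox) cannot
# build the trust chain and rejects the certificate.
CONF=$(mktemp)
cat <<'EOF' > "$CONF"
SSLEngine on
SSLCertificateFile      /etc/pki/tls/certs/example.com.crt
SSLCertificateKeyFile   /etc/pki/tls/private/example.com.key
SSLCertificateChainFile /etc/pki/tls/certs/startcom-intermediate.pem
EOF
grep -c 'SSLCertificateChainFile' "$CONF"
```

After adding the directive, `apachectl configtest && apachectl graceful` picks up the change without dropping connections.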

 

Published
Categorized as Technology

Practices of using MySQL and DBPool

A draft document I wrote for my company; a Chinese version will follow later.

Summary

In a large cluster environment, managing hundreds of MySQL databases is always a challenge:

  • Planned downtime for database server maintenance
  • Scalability
  • Change of data structures

Ideally, all of these are handled by a proper development process, and you have enough software engineers to support it.
But in the real world, solving the immediate problems is more urgent.

Here are the 9 best practices that let us operate thousands of app servers and databases without any “user impact” from database maintenance.

Database design:

1 Global design

We design our database schema like a one-node cluster: the same services can run on one box or on 1000 boxes.
The difference is transparent to software engineers.

2 Keep it simple

Use only basic MySQL features: tables, primary keys, indexes, and replication.

Application design:

3 Use the app servers’ CPU.

App servers are scalable, but MySQL is a bottleneck. Push as much work as possible onto application CPUs, which means:
Do not query without an index.
Do not sort in the database.
Do not use any query that creates temporary tables.
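One practical way to enforce these rules (the plan text below is a hypothetical stand-in for a real `EXPLAIN` result) is to gate every new query on its EXPLAIN output:

```shell
# Hypothetical pre-commit check: reject any query whose plan sorts or
# builds temporary tables on the MySQL side. In practice $PLAN would
# come from `EXPLAIN <query>` run against a staging database.
PLAN="Using where; Using filesort"
case "$PLAN" in
  *filesort*|*temporary*) echo "rejected: do this work in the app server" ;;
  *)                      echo "ok" ;;
esac
```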

4 Use an abstraction layer over tables

We have been using DBPool for a long time.

Ops workflow:

5 Vertical partitioning

When you want to move a few tables to a different master:
a. Set up the new master (B) as a slave of the old master (A).
b. Change the DBPool configuration to point the master to B.
c. During the transition, B will still receive some updates replicated from clients that write to A.
d. Make sure no queries are still reaching the old copies of the tables.
e. Stop the replication and optionally drop the old tables on A.
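The replication half of these steps can be sketched with standard MySQL statements; host names, credentials, and binlog coordinates below are hypothetical placeholders, and the DBPool configuration change is not shown:

```shell
# Sketch of step (a): make B a slave of A, then cut over.
SQL=$(cat <<'EOF'
CHANGE MASTER TO
  MASTER_HOST='master-a.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=107;
START SLAVE;
-- step (e), only after all traffic is confirmed on B:
-- STOP SLAVE; RESET SLAVE;
EOF
)
echo "$SQL"
# In production this would be piped to the new master:
#   echo "$SQL" | mysql -h master-b.example.com -u root -p
```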

6 Horizontal sharding

When you want to distribute the data of one table across more physical servers:
a. Estimate how many shards are needed and choose a proper sharding key. After sharding, a request should visit only one instance.
b. A good shard count is 10 or 100; it is human-friendly when debugging.
c. You do not need 100 physical servers to deploy all the shards; DBPool can route them.
d. Use the same method as in (5) to move tables to a new master.
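To illustrate why 100 is a human-friendly shard count, here is a hypothetical modulo-100 routing rule:

```shell
# Hypothetical routing: user id modulo 100 chooses the logical shard.
# With 100 shards the shard number is simply the last two digits of the
# id, which is what makes it easy on a human during debugging.
user_id=202911262
shard=$((user_id % 100))
echo "user ${user_id} -> shard ${shard}"
```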

7 Change of data structures

a. We only add columns; we never drop columns.
b. Application-level compatibility is required. Make sure the new code works with both old and new data (if that is impossible, see (8)).
c. Make the schema changes.
d. Update the application to use the new column for the new feature.
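Step (c) is always an additive change. A sketch, with a hypothetical table and column:

```shell
# Additive-only schema change (table and column names are hypothetical).
# Old code keeps working because it simply ignores the new column;
# new code must tolerate NULL in rows written before the change.
SQL="ALTER TABLE user_profile ADD COLUMN avatar_url VARCHAR(255) NULL DEFAULT NULL;"
echo "$SQL"
# In production: echo "$SQL" | mysql mydb
```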

8 Data migration

This situation usually involves a big change to the logic, and you need to redesign the structure.
a. Create a new master (B) for the tables using (5).
b. Create the new data structures on B.
c. Create a trigger on B that updates the new structure whenever the old data changes.
d. Migrate the old data into the new structure; note that (c) has already migrated some recent rows.
e. Create a new abstract instance in DBPool for the new structures.
f. Update the application to read from the new structure.
g. During this period, old and new clients see the same data, thanks to (c).
h. Update the application to write to the new structure.
i. Stop the replication from (a), drop the trigger from (c), and drop the old tables.
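Step (c) can be sketched as a MySQL trigger. All table and column names here are hypothetical; the real trigger depends entirely on the redesigned structure:

```shell
# Hypothetical trigger for step (c): every write to the old table is
# mirrored into the new structure, so rows changed after the bulk
# migration of step (d) stay consistent with the new table.
SQL=$(cat <<'EOF'
CREATE TRIGGER mirror_to_new AFTER INSERT ON old_action
FOR EACH ROW
  INSERT INTO new_action (user_id, verb, created_at)
  VALUES (NEW.user_id, NEW.verb, NEW.created_at);
EOF
)
echo "$SQL"
```

A matching AFTER UPDATE trigger is needed if rows are mutated in place rather than only inserted.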

9 Planned maintenance

a. DBPool alone is enough to move MySQL slave servers.
b. Use (5) to build a new master, or promote one slave to master.

Published
Categorized as Technology

Free Website Hosting (PaaS)

Since AppFog cancelled its free hosting, the remaining free accounts have been getting slower and slower. After comparing various cloud hosts, OpenShift is currently the best choice.
Update 2015-Mar-06: AppFog's environment ships an old version of the DBCP package, which is a real headache.

OpenShift's free account has many highlights:

  • Three 512MB gears
    • enough for 2 apps and 1 MySQL instance;
    • both apps can auto-scale up
  • ssh login;
  • load balancing with HAProxy, at no charge;
  • custom domains supported;
  • SSL on the default rhcloud.com domain;
  • registering a Bronze account with a credit card can still be free if you control usage, and it adds SSL for custom domains.

After a trial run, the performance is much higher than AppFog's; for a personal website this is a luxurious setup, and it is also convenient for development and functional testing.

OpenSSL and cURL for iOS

Yesterday I updated my iOS app and took the chance to upgrade the two libraries it depends on, OpenSSL and cURL:
upgraded both to the latest versions, and added support for the 64-bit arm64 CPU in the new iPhone 5s.

Approach

The key to cross-compiling these two libraries is two compiler flags: -isysroot and -miphoneos-version-min.
cURL is cross-compiled by assembling the right configure parameters, while OpenSSL has a built-in target called iphoneos-cross; I added arm64 and simulator support on top of it.
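To make the two flags concrete, here is a sketch of how the compiler invocation is assembled. The SDK path and minimum version are illustrative; on a real machine `xcrun --sdk iphoneos --show-sdk-path` would give the path:

```shell
# Illustrative only: how -isysroot and -miphoneos-version-min combine.
# The SDK path matches an Xcode 5.1 install; adjust to your machine.
SDKROOT="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.1.sdk"
MIN_IOS="5.0"
CFLAGS="-arch armv7 -isysroot ${SDKROOT} -miphoneos-version-min=${MIN_IOS}"
echo "$CFLAGS"
```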

Code

The code for both projects is on GitHub, tested with iOS SDK 7.1 on Mac OS X 10.8:
https://github.com/sinofool/build-openssl-ios
https://github.com/sinofool/build-libcurl-ios

Usage

There is no need to clone the scripts: download the OpenSSL and cURL sources from the official websites, unpack them, and pipe the script from GitHub straight into bash; the build results will be placed on the desktop.

curl -O http://www.openssl.org/source/openssl-1.0.1f.tar.gz
tar xf openssl-1.0.1f.tar.gz
cd openssl-1.0.1f
curl https://raw.githubusercontent.com/sinofool/build-openssl-ios/master/build_openssl_dist.sh |bash

The same works for cURL:

curl -O http://curl.haxx.se/download/curl-7.35.0.tar.gz
tar xf curl-7.35.0.tar.gz
cd curl-7.35.0
curl https://raw.githubusercontent.com/sinofool/build-libcurl-ios/master/build_libcurl_dist.sh |bash

Build Google protobuf 2.4.1 for iOS development

Although iOS 5 ships with a private version of protobuf, it is too old for me.
Here is the script I use to build protobuf.
1. Download the newest protobuf (2.4.1 at the moment) and unpack it.
2. Run the script below in the unpacked directory. It will create a folder named “protobuf_dist” on the desktop.
3. Copy or add protobuf_dist to your Xcode project. That’s all.
#!/bin/bash

TMP_DIR=/tmp/protobuf_$$

###################################################
# Build the i386 version first,
# because the arm build needs its protoc binary.
###################################################

CFLAGS=-m32 CPPFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32 ./configure --prefix=${TMP_DIR}/i386 \
--disable-shared \
--enable-static || exit 1
make clean || exit 2
make -j8 || exit 3
make install || exit 4

###################################################
# Build armv7 version,
###################################################

SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk
DEVROOT=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer

export CC=${DEVROOT}/usr/bin/llvm-gcc
export CFLAGS="-arch armv7 -isysroot $SDKROOT"

export CXX=${DEVROOT}/usr/bin/llvm-g++
export CXXFLAGS="$CFLAGS"
export LDFLAGS="-isysroot $SDKROOT -Wl,-syslibroot $SDKROOT"

./configure --prefix=$TMP_DIR/armv7 \
--with-protoc=${TMP_DIR}/i386/bin/protoc \
--disable-shared \
--enable-static \
--host=arm-apple-darwin10 || exit 1
make clean || exit 2
make -j8 || exit 3
make install || exit 4

###################################################
# Packing
###################################################

DIST_DIR=$HOME/Desktop/protobuf_dist
rm -rf ${DIST_DIR}
mkdir -p ${DIST_DIR}
mkdir ${DIST_DIR}/{bin,lib}
cp -r ${TMP_DIR}/armv7/include ${DIST_DIR}/
cp ${TMP_DIR}/i386/bin/protoc ${DIST_DIR}/bin/
lipo -arch i386 ${TMP_DIR}/i386/lib/libprotobuf.a -arch armv7 ${TMP_DIR}/armv7/lib/libprotobuf.a -output ${DIST_DIR}/lib/libprotobuf.a -create

This was tested on OS X Lion with Xcode 4.2.1.

Published
Categorized as iOS, Technology

Data Analysis with Hive

After rolling out streaming-style data analysis at scale, we found that although this model has a low barrier to entry, its execution efficiency is just as low.
Every map task starts two processes on the TaskTracker, one Java and one perl/bash/python,
and both input and output get copied one extra time.

After some research, we started rewriting part of our streaming jobs in Hive.

What is Hive?

  1. Hive is a SQL parsing engine that runs on a single machine; it does not itself run on Hadoop.
  2. Hive compiles the SQL into MapReduce jobs, which run on Hadoop.
  3. Using Hive lowers communication costs, because SQL syntax is widely known.
  4. The jobs Hive generates are reasonably efficient, though still slower than hand-optimized pure MapReduce jobs.

Preparing the data

The raw log file looks like this:
1323431269786 202911262 RE_223500512 AT_BLOG_788514510 REPLY BLOG_788514510_202911262

The fields are <time> <actor> [[qualifier] [qualifier]…] <verb> <object>.
The example above means:
  • <time>: 1323431269786
  • <actor>: 202911262
  • [qualifier]: RE_223500512
  • [qualifier]: AT_BLOG_788514510
  • <verb>: REPLY
  • <object>: BLOG_788514510_202911262
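Because the qualifier list is variable-length, the stable anchors when splitting a line are its two ends. A quick awk sketch using the sample line above:

```shell
# The first two fields and the last two fields are fixed; everything in
# between is the variable-length qualifier list.
LINE="1323431269786 202911262 RE_223500512 AT_BLOG_788514510 REPLY BLOG_788514510_202911262"
echo "$LINE" | awk '{ print "time=" $1; print "actor=" $2; print "verb=" $(NF-1); print "obj=" $NF }'
```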

Extending Hive's Deserializer

To analyze the data with SQL, Hive must know how to split each full log line into fields. Hive provides an interface through which we can plug in our own serialization and deserialization methods.

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.serde2.Deserializer;
import org.apache.hadoop.hive.serde2.SerDeException;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.ObjectInspectorOptions;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class RawActionDeserializer implements Deserializer {

    @Override
    public Object deserialize(Writable obj) throws SerDeException {
        // Filled in below, in "Defining the parse function".
        return null;
    }

    @Override
    public ObjectInspector getObjectInspector() throws SerDeException {
        // Filled in below, in "Defining the table structure".
        return null;
    }

    @Override
    public void initialize(Configuration conf, Properties props)
            throws SerDeException {
        // Our log format is fixed; nothing to configure at runtime.
    }
}

The three methods do the following:

  • initialize: called at startup; adjusts behavior or allocates resources according to runtime parameters.
  • getObjectInspector: returns the field names and types.
  • deserialize: deserializes one row of data and returns the result.

Defining the table structure

In our example the fields have fixed meanings, so there is no need to read runtime parameters in the initialize method. We declare the field definitions as static, as follows.

private static List<String> structFieldNames = new ArrayList<String>();

private static List<ObjectInspector> structFieldObjectInspectors = new ArrayList<ObjectInspector>();
static {
    structFieldNames.add("time");
    structFieldObjectInspectors.add(ObjectInspectorFactory
            .getReflectionObjectInspector(Long.TYPE, ObjectInspectorOptions.JAVA));

    structFieldNames.add("id");
    structFieldObjectInspectors.add(ObjectInspectorFactory
            .getReflectionObjectInspector(
                    java.lang.Integer.TYPE, ObjectInspectorOptions.JAVA));

    structFieldNames.add("adv");
    structFieldObjectInspectors.add(ObjectInspectorFactory
            .getStandardListObjectInspector(
                    ObjectInspectorFactory.getReflectionObjectInspector(
                            String.class, ObjectInspectorOptions.JAVA)));

    structFieldNames.add("verb");
    structFieldObjectInspectors
            .add(ObjectInspectorFactory.getReflectionObjectInspector(
                    String.class, ObjectInspectorOptions.JAVA));

    structFieldNames.add("obj");
    structFieldObjectInspectors
            .add(ObjectInspectorFactory.getReflectionObjectInspector(
                    String.class, ObjectInspectorOptions.JAVA));
}

@Override
public ObjectInspector getObjectInspector() throws SerDeException {
    return ObjectInspectorFactory.getStandardStructObjectInspector(
            structFieldNames, structFieldObjectInspectors);
}

定义解析函数

为了能够让Java MapReduce任务复用代码,我们在外部实现了一个与Hive无关的类,这里不再贴代码。这个类定义了与日志字段相同的成员变量,并且提供一个static的valueOf方法用于从字符串构造自己。

@Override
public Object deserialize(Writable blob) throws SerDeException {
    if (blob instanceof Text) {
        String line = ((Text) blob).toString();
        RawAction act = RawAction.valueOf(line);
        if (act == null)
            return null;
        List<Object> result = new ArrayList<Object>();
        result.add(act.getTime());
        result.add(act.getUserId());
        result.add(act.getAdv());
        result.add(act.getVerb());
        result.add(act.getObj());
        return result;
    }
    return null;
}

Creating the table

After compiling the program above and uploading the jar to the Hive deployment directory, start hive:

$ ./hive --auxpath /home/bochun.bai/dp-base-1.0-SNAPSHOT.jar


hive> CREATE TABLE ac_raw ROW FORMAT SERDE 'com.renren.dp.hive.RawActionDeserializer';
OK
Time taken: 0.117 seconds
hive> DESC ac_raw;
OK
time bigint from deserializer
id int from deserializer
adv array<string> from deserializer
verb string from deserializer
obj string from deserializer
Time taken: 0.145 seconds


hive> LOAD DATA INPATH '/user/bochun.bai/hivedemo/raw_action' OVERWRITE INTO TABLE ac_raw;
Loading data to table default.ac_raw
Deleted hdfs://NAMENODE/user/bochun.bai/warehouse/ac_raw
OK
Time taken: 0.173 seconds


hive> SELECT count(1) FROM ac_raw;
...... (lots of MapReduce progress output) ......
OK
332
Time taken: 15.404 seconds


hive> SELECT count(1) as cnt, verb FROM ac_raw GROUP BY verb;
...... (lots of MapReduce progress output) ......
OK
4 ADD_FOOTPRINT
1 REPLY
24 SHARE_BLOG
299 VISIT
4 add_like
Time taken: 15.242 seconds

Technology and Incidents

This was written for colleagues internally, but some of the concepts are generally useful, so I am posting it without encryption.

Anyone who develops software knows that writing code inevitably produces errors, called bugs.
Some errors are not caught before launch, and we only learn about them once users hit them. Such bugs are incidents.
Incidents are the most normal thing in the world; I believe that, handled well, they can become a valuable asset.

The incident-handling workflow I designed has three phases: feedback, resolution, and reflection.
1 Feedback phase
Honestly, most incidents are similar. The feedback phase is about normalizing the various user reports into a single technical issue.
This phase is further divided into three states, “reported by user”, “communicated”, and “confirmed by engineering”, operated by customer support and the engineering manager.
2 Resolution phase
This is the coding phase: fix the bug that caused the incident. Engineers all know how to do this.
It is divided into two states, “assigned” and “fixed in production”, owned by the engineering manager and the QA manager respectively.
In other words, the front-line engineers do their coding in the “assigned” state, and then hand over to Ops and QA.
3 Reflection phase
This is the most essential phase, and also the one most often missing.
It is divided into two states: “solution confirmed” and “solution completed”.
A true “solution” must strike at the root cause and guarantee that similar problems will not recur.
The difference between a “solution” and a “bug fix” is whether the root cause has been reflected upon.

The core purpose of the whole workflow is self-improvement. As long as we keep making mistakes, yet avoid repeating them, we will end up a great team.

Enabling Kerberos Authentication in Google Chrome

Start Chrome with the option:
--auth-server-whitelist="domain.example.com"
and Kerberos authentication will work under that domain.
macOS and Linux both share credentials correctly with the command-line kinit; Windows does not, because there is no built-in support.

For now I still launch Chrome from the command line with this option; there does not seem to be anywhere to persist the setting.
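For copy-paste convenience, this is roughly what the launch command looks like (the domain is a placeholder):

```shell
# Hypothetical domain. Note the plain ASCII double dash and quotes:
# smart quotes pasted from a blog post will silently break the flag.
FLAG='--auth-server-whitelist="domain.example.com"'
# macOS:
echo "open -a 'Google Chrome' --args $FLAG"
# Linux:
echo "google-chrome $FLAG"
```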

Published
Categorized as Mac

[Updated] Hive for Hadoop 0.21.0

We are finally upgrading to 0.21.0; the hardest part at the moment is the compatibility between Hive-0.7.0 and Hadoop-0.21.

Here is a version of Hive I patched that runs on 0.21:
http://sinofool.com/hive-0.7.0-r1043843.tar.bz2
MD5SUM: 8adb62c176b203b9d3cf5edc5d37b375
The code is at: HIVE-1612

P.S. This build is a bit old; it was modified on November 10, 2010, so it lags somewhat behind the official 0.7.0.

UPDATED:

Code as of May 20, 2011, still compiled with the patch above; basically usable.

HBase support has been removed, because HBase itself does not yet support 0.21.0. Next I will solve the HBase compatibility issues and then submit them to JIRA.

hive-0.8.0-SNAPSHOT-r1125002-bin.tar.gz
MD5SUM: 7a48b50d375aae5ee69cd42dbd7bdd16

UPDATED:
Code as of May 23, 2011, same as above.
hive-0.8.0-SNAPSHOT-r1127826-bin.tar.gz
MD5SUM: 9dd4cb9d850894353a18df399b8c7b53
