36 Commits

Author SHA1 Message Date
Toby
7a52228ec6 docs: add a section about openwrt (#53) 2024-02-11 13:12:49 -08:00
Toby
6d33a0d51c fix: incorrect verdict handling that caused packets to pass through even after they had been blocked (#52) 2024-02-11 13:05:05 -08:00
Toby
27c9b91a61 feat: nftables support (#50)
* feat: nftables support

* fix: format
2024-02-11 13:04:49 -08:00
TAKAHASHI Shuuji
36bb4b796d docs: update README.ja.md (#49) 2024-02-05 20:44:56 -08:00
Toby
843f17896c fix: netlink race condition (#48) 2024-02-05 19:32:52 -08:00
Toby
6871244809 chore: improve built-in funcs handling (#43) 2024-02-04 11:17:19 -08:00
Toby
f8f0153664 feat: rules hot reload via SIGHUP (#44) 2024-02-03 10:55:20 -08:00
Haruue
8d94400855 Add WireGuard analyzer (#41)
* feat: add WireGuard analyzer

* chore(wg): reduce map creating for non wg packets

* chore: import format

* docs: add wg usage

---------

Co-authored-by: Toby <tobyxdd@gmail.com>
2024-01-30 18:05:51 -08:00
Rinka
f07a38bc47 Add GeoIP and GeoSite support for expr (#38)
* feat: copy something from hysteria/extras/outbounds/acl

* feat: add geoip and geosite support for expr

* refactor: geo matcher

* fix: typo

* refactor: geo matcher

* feat: expose config options to specify local geoip/geosite db files

* refactor: engine.Config should not contains geo

* feat: make geosite and geoip lazy downloaded

* chore: minor code improvement

* docs: add geoip/geosite usage

---------

Co-authored-by: Toby <tobyxdd@gmail.com>
2024-01-30 17:30:35 -08:00
Toby
e23f8e06a2 docs: add socks4 2024-01-27 13:58:35 -08:00
Toby
3367cccf8c docs: update socks rules 2024-01-27 13:56:08 -08:00
Toby
e6e9656ec6 Merge pull request #31 from eltociear/add_ja-readme
docs: add Japanese README
2024-01-27 13:49:06 -08:00
Toby
73d78489b5 Merge pull request #35 from KujouRinka/master
Add Socks4/4a Analyzer
2024-01-27 13:47:50 -08:00
Toby
63510eda5e chore: minor doc fix 2024-01-27 13:40:29 -08:00
Toby
a2475d3722 fix: remove "reject with tcp reset" for now as it doesn't work properly 2024-01-27 13:27:27 -08:00
KujouRinka
bd724f43c0 docs: update socks doc 2024-01-27 21:01:40 +08:00
KujouRinka
ff27ee512a refactor: merge sock4 and socks5 into one 2024-01-27 20:45:11 +08:00
KujouRinka
1ae0455fd5 docs: add sock4/4a doc 2024-01-27 14:09:21 +08:00
KujouRinka
96716561e0 feat: add sock4/4a analyzer 2024-01-27 14:05:28 +08:00
KujouRinka
ddfb2ce2af style: rename vars to avoid namespace pollution 2024-01-27 11:17:39 +08:00
Toby
90542be7f2 docs: add SOCKS5 2024-01-26 14:03:22 -08:00
Toby
fe2ff6aa69 Merge pull request #34 from KujouRinka/master
Add Socks5 Analyzer
2024-01-26 13:59:26 -08:00
Toby
bd92e716ce docs: improve grammar 2024-01-26 13:57:15 -08:00
Toby
e0712f1d51 fix: import format 2024-01-26 13:54:34 -08:00
Toby
4581d0babe Merge branch 'master' of https://github.com/KujouRinka/OpenGFW into feat-socks5 2024-01-26 13:37:41 -08:00
KujouRinka
b0106c9941 docs: add socks5 doc 2024-01-26 14:45:26 +08:00
Toby
eeb234552c Merge pull request #33 from Fangliding/master
Add ECH in doc
2024-01-25 22:00:30 -08:00
风扇滑翔翼
cbbca0353e Add ECH in doc 2024-01-26 12:54:20 +08:00
KujouRinka
f004d17522 feat: set Limit for socks5 analyzer 2024-01-26 11:31:11 +08:00
KujouRinka
d2d4fa723a feat: socks5 analyzer 2024-01-26 11:24:36 +08:00
Toby
d7d3437d3c docs: analyzers 2024-01-24 20:01:53 -08:00
Ikko Eltociear Ashimine
ce9f0145da docs: add Japanese README 2024-01-25 13:01:43 +09:00
Toby
7441d24aea docs: trivial updates 2024-01-21 16:11:01 -08:00
Toby
99403d3150 docs: logo transparency 2024-01-21 15:33:18 -08:00
Toby
1041d5fde1 feat: Trojan analyzer based on github.com/XTLS/Trojan-killer 2024-01-21 14:48:54 -08:00
Toby
00d88d7fbf fix: incorrect prop update logic 2024-01-21 12:26:23 -08:00
27 changed files with 3151 additions and 84 deletions

README.ja.md Normal file

@@ -0,0 +1,134 @@
# ![OpenGFW](docs/logo.png)
[![License][1]][2]
[1]: https://img.shields.io/badge/License-MPL_2.0-brightgreen.svg
[2]: LICENSE
OpenGFW is a flexible, easy-to-use, open source implementation of [GFW](https://en.wikipedia.org/wiki/Great_Firewall) on Linux that's in many ways more powerful than the real thing. It's cyber sovereignty you can have on a home router.

> [!CAUTION]
> This project is still in very early stages of development. Use at your own risk.

> [!NOTE]
> We are looking for contributors to help us with this project, especially implementing analyzers for more protocols!!!
## Features
- Full IP/TCP reassembly, various protocol analyzers
  - HTTP, TLS, DNS, SSH, SOCKS4/5, WireGuard, and many more to come
  - "Fully encrypted traffic" detection for Shadowsocks, etc. (https://gfw.report/publications/usenixsecurity23/data/paper/paper.pdf)
  - Trojan (proxy protocol) detection based on Trojan-killer (https://github.com/XTLS/Trojan-killer)
  - [WIP] Machine learning based traffic classification
- Full IPv4 and IPv6 support
- Flow-based multicore load balancing
- Connection offloading
- Powerful rule engine based on [expr](https://github.com/expr-lang/expr)
- Hot-reloadable rules (send `SIGHUP` to reload)
- Flexible analyzer & modifier framework
- Extensible IO implementation (only NFQueue for now)
- [WIP] Web UI

## Use cases
- Ad blocking
- Parental control
- Malware protection
- Abuse prevention for VPN/proxy services
- Traffic analysis (log only mode)

## Usage
### Build
```shell
go build
```
### Run
```shell
export OPENGFW_LOG_LEVEL=debug
./OpenGFW -c config.yaml rules.yaml
```
#### OpenWrt
OpenGFW has been tested to work on OpenWrt 23.05 (other versions should also work, just not verified).

Install the dependencies:
```shell
opkg install kmod-nft-queue kmod-nf-conntrack-netlink
```
### Example config

```yaml
io:
  queueSize: 1024
  local: true # set to false if you want to run OpenGFW on the FORWARD chain

workers:
  count: 4
  queueSize: 16
  tcpMaxBufferedPagesTotal: 4096
  tcpMaxBufferedPagesPerConn: 64
  udpMaxStreams: 4096
```
### Example rules

[Analyzer properties](docs/Analyzers.md)

For syntax of the expression language, please refer to [Expr Language Definition](https://expr-lang.org/docs/language-definition).
```yaml
- name: block v2ex http
  action: block
  expr: string(http?.req?.headers?.host) endsWith "v2ex.com"

- name: block v2ex https
  action: block
  expr: string(tls?.req?.sni) endsWith "v2ex.com"

- name: block shadowsocks
  action: block
  expr: fet != nil && fet.yes

- name: block trojan
  action: block
  expr: trojan != nil && trojan.yes

- name: v2ex dns poisoning
  action: modify
  modifier:
    name: dns
    args:
      a: "0.0.0.0"
      aaaa: "::"
  expr: dns != nil && dns.qr && any(dns.questions, {.name endsWith "v2ex.com"})

- name: block google socks
  action: block
  expr: string(socks?.req?.addr) endsWith "google.com" && socks?.req?.port == 80

- name: block wireguard by handshake response
  action: drop
  expr: wireguard?.handshake_response?.receiver_index_matched == true

- name: block bilibili geosite
  action: block
  expr: geosite(string(tls?.req?.sni), "bilibili")

- name: block CN geoip
  action: block
  expr: geoip(string(ip.dst), "cn")
```
#### Supported actions
- `allow`: Allow the connection, no further processing.
- `block`: Block the connection, no further processing.
- `drop`: For UDP, drop the packet that triggered the rule, continue processing future packets in the same flow. For TCP, same as `block`.
- `modify`: For UDP, modify the packet that triggered the rule using the given modifier, continue processing future packets in the same flow. For TCP, same as `allow`.

README.md

@@ -3,10 +3,10 @@
[![License][1]][2]
[1]: https://img.shields.io/badge/License-MPL_2.0-brightgreen.svg
[2]: LICENSE
**[中文文档](README.zh.md)**
**[日本語ドキュメント](README.ja.md)**
OpenGFW is a flexible, easy-to-use, open source implementation of [GFW](https://en.wikipedia.org/wiki/Great_Firewall) on
Linux that's in many ways more powerful than the real thing. It's cyber sovereignty you can have on a home router.
@@ -20,13 +20,16 @@ Linux that's in many ways more powerful than the real thing. It's cyber sovereig
## Features
- Full IP/TCP reassembly, various protocol analyzers
- HTTP, TLS, DNS, SSH, and many more to come
- "Fully encrypted traffic" detection for Shadowsocks,
etc. (https://gfw.report/publications/usenixsecurity23/data/paper/paper.pdf)
- [WIP] Machine learning based traffic classification
- HTTP, TLS, DNS, SSH, SOCKS4/5, WireGuard, and many more to come
- "Fully encrypted traffic" detection for Shadowsocks,
etc. (https://gfw.report/publications/usenixsecurity23/data/paper/paper.pdf)
- Trojan (proxy protocol) detection based on Trojan-killer (https://github.com/XTLS/Trojan-killer)
- [WIP] Machine learning based traffic classification
- Full IPv4 and IPv6 support
- Flow-based multicore load balancing
- Connection offloading
- Powerful rule engine based on [expr](https://github.com/expr-lang/expr)
- Hot-reloadable rules (send `SIGHUP` to reload)
- Flexible analyzer & modifier framework
- Extensible IO implementation (only NFQueue for now)
- [WIP] Web UI
@@ -54,6 +57,16 @@ export OPENGFW_LOG_LEVEL=debug
./OpenGFW -c config.yaml rules.yaml
```
#### OpenWrt
OpenGFW has been tested to work on OpenWrt 23.05 (other versions should also work, just not verified).
Install the dependencies:
```shell
opkg install kmod-nft-queue kmod-nf-conntrack-netlink
```
### Example config
```yaml
@@ -71,8 +84,7 @@ workers:
### Example rules
Documentation on all supported protocols and what field each one has is not yet ready. For now, you have to check the
code under "analyzer" directory directly.
[Analyzer properties](docs/Analyzers.md)
For syntax of the expression language, please refer
to [Expr Language Definition](https://expr-lang.org/docs/language-definition).
@@ -90,6 +102,10 @@ to [Expr Language Definition](https://expr-lang.org/docs/language-definition).
  action: block
  expr: fet != nil && fet.yes
- name: block trojan
  action: block
  expr: trojan != nil && trojan.yes
- name: v2ex dns poisoning
  action: modify
  modifier:
@@ -98,13 +114,29 @@ to [Expr Language Definition](https://expr-lang.org/docs/language-definition).
      a: "0.0.0.0"
      aaaa: "::"
  expr: dns != nil && dns.qr && any(dns.questions, {.name endsWith "v2ex.com"})
- name: block google socks
  action: block
  expr: string(socks?.req?.addr) endsWith "google.com" && socks?.req?.port == 80
- name: block wireguard by handshake response
  action: drop
  expr: wireguard?.handshake_response?.receiver_index_matched == true
- name: block bilibili geosite
  action: block
  expr: geosite(string(tls?.req?.sni), "bilibili")
- name: block CN geoip
  action: block
  expr: geoip(string(ip.dst), "cn")
```
#### Supported actions
- `allow`: Allow the connection, no further processing.
- `block`: Block the connection, no further processing. Send a TCP RST if it's a TCP connection.
- `block`: Block the connection, no further processing.
- `drop`: For UDP, drop the packet that triggered the rule, continue processing future packets in the same flow. For
TCP, same as `block`.
- `modify`: For UDP, modify the packet that triggered the rule using the given modifier, continue processing future
packets in the same flow. For TCP, same as `allow`.
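
The TCP/UDP asymmetry of `drop` and `modify` described above can be sketched as a small decision function. This is an illustrative sketch only; the names `Verdict` and `verdictFor` are hypothetical and not OpenGFW's actual API:

```go
package main

import "fmt"

// Verdict describes what a rule action translates to, per the README:
// drop/modify only differ from block/allow on UDP, where they act on the
// single triggering packet instead of terminating further processing.
type Verdict string

const (
	VerdictAcceptFlow Verdict = "accept flow"   // allow: no further processing
	VerdictRejectFlow Verdict = "reject flow"   // block: no further processing
	VerdictDropPacket Verdict = "drop packet"   // drop this packet, keep analyzing the flow
	VerdictModPacket  Verdict = "modify packet" // rewrite this packet, keep analyzing the flow
)

// verdictFor is a hypothetical helper mirroring the documented semantics.
func verdictFor(action string, udp bool) Verdict {
	switch action {
	case "allow":
		return VerdictAcceptFlow
	case "block":
		return VerdictRejectFlow
	case "drop":
		if udp {
			return VerdictDropPacket
		}
		return VerdictRejectFlow // for TCP, same as block
	case "modify":
		if udp {
			return VerdictModPacket
		}
		return VerdictAcceptFlow // for TCP, same as allow
	}
	return VerdictRejectFlow
}

func main() {
	fmt.Println(verdictFor("drop", false)) // TCP drop behaves like block
	fmt.Println(verdictFor("modify", true))
}
```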

README.zh.md

@@ -3,7 +3,6 @@
[![License][1]][2]
[1]: https://img.shields.io/badge/License-MPL_2.0-brightgreen.svg
[2]: LICENSE
OpenGFW is a flexible, easy-to-use, open source implementation of [GFW](https://zh.wikipedia.org/wiki/%E9%98%B2%E7%81%AB%E9%95%BF%E5%9F%8E) on Linux
@@ -18,12 +17,15 @@ OpenGFW is a flexible, easy-to-use, open source implementation of [GFW](https://zh.wikipedi
## Features
- Full IP/TCP reassembly, various protocol analyzers
- HTTP, TLS, DNS, SSH, and many more to come
- "Fully encrypted traffic" detection for Shadowsocks, etc. (https://gfw.report/publications/usenixsecurity23/data/paper/paper.pdf)
- [WIP] Machine learning based traffic classification
- HTTP, TLS, DNS, SSH, SOCKS4/5, WireGuard, and many more to come
- "Fully encrypted traffic" detection for Shadowsocks, etc. (https://gfw.report/publications/usenixsecurity23/data/paper/paper.pdf)
- Trojan detection based on Trojan-killer (https://github.com/XTLS/Trojan-killer)
- [WIP] Machine learning based traffic classification
- Full support for both IPv4 and IPv6
- Flow-based multicore load balancing
- Connection offloading
- Powerful rule engine based on [expr](https://github.com/expr-lang/expr)
- Hot-reloadable rules (send a `SIGHUP` signal)
- Flexible protocol analyzer & modifier framework
- Extensible IO implementation (only NFQueue for now)
- [WIP] Web UI
@@ -51,6 +53,16 @@ export OPENGFW_LOG_LEVEL=debug
./OpenGFW -c config.yaml rules.yaml
```
#### OpenWrt
OpenGFW has been tested to work on OpenWrt 23.05 (other versions should also work, just not verified).

Install the dependencies:
```shell
opkg install kmod-nft-queue kmod-nf-conntrack-netlink
```
### Example config
```yaml
@@ -68,7 +80,7 @@ workers:
### Example rules
Documentation on which protocols the rules support, and what fields each protocol has, is not yet written. For now, please refer directly to the code under the "analyzer" directory.
[Analyzer properties](docs/Analyzers.md)

For rule syntax, please refer to the [Expr Language Definition](https://expr-lang.org/docs/language-definition).
@@ -85,6 +97,10 @@ workers:
  action: block
  expr: fet != nil && fet.yes
- name: block trojan
  action: block
  expr: trojan != nil && trojan.yes
- name: v2ex dns poisoning
  action: modify
  modifier:
@@ -93,11 +109,27 @@ workers:
      a: "0.0.0.0"
      aaaa: "::"
  expr: dns != nil && dns.qr && any(dns.questions, {.name endsWith "v2ex.com"})
- name: block google socks
  action: block
  expr: string(socks?.req?.addr) endsWith "google.com" && socks?.req?.port == 80
- name: block wireguard by handshake response
  action: drop
  expr: wireguard?.handshake_response?.receiver_index_matched == true
- name: block bilibili geosite
  action: block
  expr: geosite(string(tls?.req?.sni), "bilibili")
- name: block CN geoip
  action: block
  expr: geoip(string(ip.dst), "cn")
```
#### Supported actions
- `allow`: Allow the connection, no further processing.
- `block`: Block the connection, no further processing. Send a TCP RST if it's a TCP connection.
- `block`: Block the connection, no further processing.
- `drop`: For UDP, drop the packet that triggered the rule, continue processing future packets in the same flow. For TCP, same as `block`.
- `modify`: For UDP, modify the packet that triggered the rule using the given modifier, continue processing future packets in the same flow. For TCP, same as `allow`.

analyzer/tcp/socks.go Normal file

@@ -0,0 +1,508 @@
package tcp
import (
"net"
"github.com/apernet/OpenGFW/analyzer"
"github.com/apernet/OpenGFW/analyzer/utils"
)
const (
SocksInvalid = iota
Socks4
Socks4A
Socks5
Socks4Version = 0x04
Socks5Version = 0x05
Socks4ReplyVN = 0x00
Socks4CmdTCPConnect = 0x01
Socks4CmdTCPBind = 0x02
Socks4ReqGranted = 0x5A
Socks4ReqRejectOrFailed = 0x5B
Socks4ReqRejectIdentd = 0x5C
Socks4ReqRejectUser = 0x5D
Socks5CmdTCPConnect = 0x01
Socks5CmdTCPBind = 0x02
Socks5CmdUDPAssociate = 0x03
Socks5AuthNotRequired = 0x00
Socks5AuthPassword = 0x02
Socks5AuthNoMatchingMethod = 0xFF
Socks5AuthSuccess = 0x00
Socks5AuthFailure = 0x01
Socks5AddrTypeIPv4 = 0x01
Socks5AddrTypeDomain = 0x03
Socks5AddrTypeIPv6 = 0x04
)
var _ analyzer.Analyzer = (*SocksAnalyzer)(nil)
type SocksAnalyzer struct{}
func (a *SocksAnalyzer) Name() string {
return "socks"
}
func (a *SocksAnalyzer) Limit() int {
// Socks4 length limit cannot be predicted
return 0
}
func (a *SocksAnalyzer) NewTCP(info analyzer.TCPInfo, logger analyzer.Logger) analyzer.TCPStream {
return newSocksStream(logger)
}
type socksStream struct {
logger analyzer.Logger
reqBuf *utils.ByteBuffer
reqMap analyzer.PropMap
reqUpdated bool
reqLSM *utils.LinearStateMachine
reqDone bool
respBuf *utils.ByteBuffer
respMap analyzer.PropMap
respUpdated bool
respLSM *utils.LinearStateMachine
respDone bool
version int
authReqMethod int
authUsername string
authPassword string
authRespMethod int
}
func newSocksStream(logger analyzer.Logger) *socksStream {
s := &socksStream{logger: logger, reqBuf: &utils.ByteBuffer{}, respBuf: &utils.ByteBuffer{}}
s.reqLSM = utils.NewLinearStateMachine(
s.parseSocksReqVersion,
)
s.respLSM = utils.NewLinearStateMachine(
s.parseSocksRespVersion,
)
return s
}
func (s *socksStream) Feed(rev, start, end bool, skip int, data []byte) (u *analyzer.PropUpdate, d bool) {
if skip != 0 {
return nil, true
}
if len(data) == 0 {
return nil, false
}
var update *analyzer.PropUpdate
var cancelled bool
if rev {
s.respBuf.Append(data)
s.respUpdated = false
cancelled, s.respDone = s.respLSM.Run()
if s.respUpdated {
update = &analyzer.PropUpdate{
Type: analyzer.PropUpdateMerge,
M: analyzer.PropMap{"resp": s.respMap},
}
s.respUpdated = false
}
} else {
s.reqBuf.Append(data)
s.reqUpdated = false
cancelled, s.reqDone = s.reqLSM.Run()
if s.reqUpdated {
update = &analyzer.PropUpdate{
Type: analyzer.PropUpdateMerge,
M: analyzer.PropMap{
"version": s.socksVersion(),
"req": s.reqMap,
},
}
s.reqUpdated = false
}
}
return update, cancelled || (s.reqDone && s.respDone)
}
func (s *socksStream) Close(limited bool) *analyzer.PropUpdate {
s.reqBuf.Reset()
s.respBuf.Reset()
s.reqMap = nil
s.respMap = nil
return nil
}
func (s *socksStream) parseSocksReqVersion() utils.LSMAction {
socksVer, ok := s.reqBuf.GetByte(true)
if !ok {
return utils.LSMActionPause
}
if socksVer != Socks4Version && socksVer != Socks5Version {
return utils.LSMActionCancel
}
s.reqMap = make(analyzer.PropMap)
s.reqUpdated = true
if socksVer == Socks4Version {
s.version = Socks4
s.reqLSM.AppendSteps(
s.parseSocks4ReqIpAndPort,
s.parseSocks4ReqUserId,
s.parseSocks4ReqHostname,
)
} else {
s.version = Socks5
s.reqLSM.AppendSteps(
s.parseSocks5ReqMethod,
s.parseSocks5ReqAuth,
s.parseSocks5ReqConnInfo,
)
}
return utils.LSMActionNext
}
func (s *socksStream) parseSocksRespVersion() utils.LSMAction {
socksVer, ok := s.respBuf.GetByte(true)
if !ok {
return utils.LSMActionPause
}
if (s.version == Socks4 || s.version == Socks4A) && socksVer != Socks4ReplyVN ||
s.version == Socks5 && socksVer != Socks5Version || s.version == SocksInvalid {
return utils.LSMActionCancel
}
if socksVer == Socks4ReplyVN {
s.respLSM.AppendSteps(
s.parseSocks4RespPacket,
)
} else {
s.respLSM.AppendSteps(
s.parseSocks5RespMethod,
s.parseSocks5RespAuth,
s.parseSocks5RespConnInfo,
)
}
return utils.LSMActionNext
}
func (s *socksStream) parseSocks5ReqMethod() utils.LSMAction {
nMethods, ok := s.reqBuf.GetByte(false)
if !ok {
return utils.LSMActionPause
}
methods, ok := s.reqBuf.Get(int(nMethods)+1, true)
if !ok {
return utils.LSMActionPause
}
	// For convenience, we only take the first method we can process.
	// Note: a plain `break` inside a switch only exits the switch, so a
	// labeled break is needed to actually stop at the first match.
	s.authReqMethod = Socks5AuthNoMatchingMethod
methodLoop:
	for _, method := range methods[1:] {
		switch method {
		case Socks5AuthNotRequired:
			s.authReqMethod = Socks5AuthNotRequired
			break methodLoop
		case Socks5AuthPassword:
			s.authReqMethod = Socks5AuthPassword
			break methodLoop
		default:
			// TODO: more auth methods to support
		}
	}
return utils.LSMActionNext
}
func (s *socksStream) parseSocks5ReqAuth() utils.LSMAction {
switch s.authReqMethod {
case Socks5AuthNotRequired:
s.reqMap["auth"] = analyzer.PropMap{"method": s.authReqMethod}
case Socks5AuthPassword:
meta, ok := s.reqBuf.Get(2, false)
if !ok {
return utils.LSMActionPause
}
if meta[0] != 0x01 {
return utils.LSMActionCancel
}
usernameLen := int(meta[1])
meta, ok = s.reqBuf.Get(usernameLen+3, false)
if !ok {
return utils.LSMActionPause
}
passwordLen := int(meta[usernameLen+2])
meta, ok = s.reqBuf.Get(usernameLen+passwordLen+3, true)
if !ok {
return utils.LSMActionPause
}
s.authUsername = string(meta[2 : usernameLen+2])
s.authPassword = string(meta[usernameLen+3:])
s.reqMap["auth"] = analyzer.PropMap{
"method": s.authReqMethod,
"username": s.authUsername,
"password": s.authPassword,
}
default:
return utils.LSMActionCancel
}
s.reqUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks5ReqConnInfo() utils.LSMAction {
/* preInfo struct
+----+-----+-------+------+-------------+
|VER | CMD | RSV | ATYP | DST.ADDR(1) |
+----+-----+-------+------+-------------+
*/
preInfo, ok := s.reqBuf.Get(5, false)
if !ok {
return utils.LSMActionPause
}
// verify socks version
if preInfo[0] != Socks5Version {
return utils.LSMActionCancel
}
var pktLen int
switch int(preInfo[3]) {
case Socks5AddrTypeIPv4:
pktLen = 10
case Socks5AddrTypeDomain:
domainLen := int(preInfo[4])
pktLen = 7 + domainLen
case Socks5AddrTypeIPv6:
pktLen = 22
default:
return utils.LSMActionCancel
}
pkt, ok := s.reqBuf.Get(pktLen, true)
if !ok {
return utils.LSMActionPause
}
// parse cmd
cmd := int(pkt[1])
if cmd != Socks5CmdTCPConnect && cmd != Socks5CmdTCPBind && cmd != Socks5CmdUDPAssociate {
return utils.LSMActionCancel
}
s.reqMap["cmd"] = cmd
// parse addr type
addrType := int(pkt[3])
var addr string
switch addrType {
case Socks5AddrTypeIPv4:
addr = net.IPv4(pkt[4], pkt[5], pkt[6], pkt[7]).String()
case Socks5AddrTypeDomain:
addr = string(pkt[5 : 5+pkt[4]])
case Socks5AddrTypeIPv6:
addr = net.IP(pkt[4 : 4+net.IPv6len]).String()
default:
return utils.LSMActionCancel
}
s.reqMap["addr_type"] = addrType
s.reqMap["addr"] = addr
// parse port
port := int(pkt[pktLen-2])<<8 | int(pkt[pktLen-1])
s.reqMap["port"] = port
s.reqUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks5RespMethod() utils.LSMAction {
method, ok := s.respBuf.Get(1, true)
if !ok {
return utils.LSMActionPause
}
s.authRespMethod = int(method[0])
s.respMap = make(analyzer.PropMap)
return utils.LSMActionNext
}
func (s *socksStream) parseSocks5RespAuth() utils.LSMAction {
switch s.authRespMethod {
case Socks5AuthNotRequired:
s.respMap["auth"] = analyzer.PropMap{"method": s.authRespMethod}
case Socks5AuthPassword:
authResp, ok := s.respBuf.Get(2, true)
if !ok {
return utils.LSMActionPause
}
if authResp[0] != 0x01 {
return utils.LSMActionCancel
}
authStatus := int(authResp[1])
s.respMap["auth"] = analyzer.PropMap{
"method": s.authRespMethod,
"status": authStatus,
}
default:
return utils.LSMActionCancel
}
s.respUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks5RespConnInfo() utils.LSMAction {
/* preInfo struct
+----+-----+-------+------+-------------+
|VER | REP | RSV | ATYP | BND.ADDR(1) |
+----+-----+-------+------+-------------+
*/
preInfo, ok := s.respBuf.Get(5, false)
if !ok {
return utils.LSMActionPause
}
// verify socks version
if preInfo[0] != Socks5Version {
return utils.LSMActionCancel
}
var pktLen int
switch int(preInfo[3]) {
case Socks5AddrTypeIPv4:
pktLen = 10
case Socks5AddrTypeDomain:
domainLen := int(preInfo[4])
pktLen = 7 + domainLen
case Socks5AddrTypeIPv6:
pktLen = 22
default:
return utils.LSMActionCancel
}
pkt, ok := s.respBuf.Get(pktLen, true)
if !ok {
return utils.LSMActionPause
}
// parse rep
rep := int(pkt[1])
s.respMap["rep"] = rep
// parse addr type
addrType := int(pkt[3])
var addr string
switch addrType {
case Socks5AddrTypeIPv4:
addr = net.IPv4(pkt[4], pkt[5], pkt[6], pkt[7]).String()
case Socks5AddrTypeDomain:
addr = string(pkt[5 : 5+pkt[4]])
case Socks5AddrTypeIPv6:
addr = net.IP(pkt[4 : 4+net.IPv6len]).String()
default:
return utils.LSMActionCancel
}
s.respMap["addr_type"] = addrType
s.respMap["addr"] = addr
// parse port
port := int(pkt[pktLen-2])<<8 | int(pkt[pktLen-1])
s.respMap["port"] = port
s.respUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks4ReqIpAndPort() utils.LSMAction {
/* Following field will be parsed in this state:
+-----+----------+--------+
| CMD | DST.PORT | DST.IP |
+-----+----------+--------+
*/
pkt, ok := s.reqBuf.Get(7, true)
if !ok {
return utils.LSMActionPause
}
if pkt[0] != Socks4CmdTCPConnect && pkt[0] != Socks4CmdTCPBind {
return utils.LSMActionCancel
}
dstPort := uint16(pkt[1])<<8 | uint16(pkt[2])
dstIp := net.IPv4(pkt[3], pkt[4], pkt[5], pkt[6]).String()
// Socks4a extension
if pkt[3] == 0 && pkt[4] == 0 && pkt[5] == 0 {
s.version = Socks4A
}
s.reqMap["cmd"] = pkt[0]
s.reqMap["addr"] = dstIp
s.reqMap["addr_type"] = Socks5AddrTypeIPv4
s.reqMap["port"] = dstPort
s.reqUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks4ReqUserId() utils.LSMAction {
userIdSlice, ok := s.reqBuf.GetUntil([]byte("\x00"), true, true)
if !ok {
return utils.LSMActionPause
}
userId := string(userIdSlice[:len(userIdSlice)-1])
s.reqMap["auth"] = analyzer.PropMap{
"user_id": userId,
}
s.reqUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks4ReqHostname() utils.LSMAction {
	// Only SOCKS4a supports a hostname field
if s.version != Socks4A {
return utils.LSMActionNext
}
hostnameSlice, ok := s.reqBuf.GetUntil([]byte("\x00"), true, true)
if !ok {
return utils.LSMActionPause
}
hostname := string(hostnameSlice[:len(hostnameSlice)-1])
s.reqMap["addr"] = hostname
s.reqMap["addr_type"] = Socks5AddrTypeDomain
s.reqUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) parseSocks4RespPacket() utils.LSMAction {
pkt, ok := s.respBuf.Get(7, true)
if !ok {
return utils.LSMActionPause
}
if pkt[0] != Socks4ReqGranted &&
pkt[0] != Socks4ReqRejectOrFailed &&
pkt[0] != Socks4ReqRejectIdentd &&
pkt[0] != Socks4ReqRejectUser {
return utils.LSMActionCancel
}
dstPort := uint16(pkt[1])<<8 | uint16(pkt[2])
dstIp := net.IPv4(pkt[3], pkt[4], pkt[5], pkt[6]).String()
s.respMap = analyzer.PropMap{
"rep": pkt[0],
"addr": dstIp,
"addr_type": Socks5AddrTypeIPv4,
"port": dstPort,
}
s.respUpdated = true
return utils.LSMActionNext
}
func (s *socksStream) socksVersion() int {
switch s.version {
case Socks4, Socks4A:
return Socks4Version
case Socks5:
return Socks5Version
default:
return SocksInvalid
}
}
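
The CONNECT-request layout handled by `parseSocks5ReqConnInfo` above (VER, CMD, RSV, ATYP, DST.ADDR, DST.PORT) can be seen in isolation with a minimal stdlib-only parser. This is a sketch for illustration: `parseSocks5Connect` is a hypothetical helper that only handles the IPv4 and domain address types and none of the analyzer's buffering or state-machine machinery:

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// parseSocks5Connect parses a complete SOCKS5 request:
// VER CMD RSV ATYP DST.ADDR DST.PORT
func parseSocks5Connect(pkt []byte) (addr string, port int, err error) {
	if len(pkt) < 7 || pkt[0] != 0x05 {
		return "", 0, errors.New("not a SOCKS5 request")
	}
	switch pkt[3] {
	case 0x01: // IPv4: 4 address bytes
		if len(pkt) != 10 {
			return "", 0, errors.New("bad length")
		}
		addr = net.IPv4(pkt[4], pkt[5], pkt[6], pkt[7]).String()
	case 0x03: // domain: 1 length byte followed by the name
		n := int(pkt[4])
		if len(pkt) != 7+n {
			return "", 0, errors.New("bad length")
		}
		addr = string(pkt[5 : 5+n])
	default:
		return "", 0, errors.New("unsupported address type")
	}
	// the port is always the last two bytes, big endian
	port = int(pkt[len(pkt)-2])<<8 | int(pkt[len(pkt)-1])
	return addr, port, nil
}

func main() {
	// CONNECT to example.com:80
	req := append([]byte{0x05, 0x01, 0x00, 0x03, 11}, []byte("example.com")...)
	req = append(req, 0x00, 0x50)
	addr, port, _ := parseSocks5Connect(req)
	fmt.Println(addr, port)
}
```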

analyzer/tcp/trojan.go Normal file

@@ -0,0 +1,91 @@
package tcp
import (
"bytes"
"github.com/apernet/OpenGFW/analyzer"
)
var _ analyzer.TCPAnalyzer = (*TrojanAnalyzer)(nil)
// CCS stands for "Change Cipher Spec"
var trojanCCS = []byte{20, 3, 3, 0, 1, 1}
const (
trojanUpLB = 650
trojanUpUB = 1000
trojanDownLB1 = 170
trojanDownUB1 = 180
trojanDownLB2 = 3000
trojanDownUB2 = 7500
)
// TrojanAnalyzer uses a very simple packet length based check to determine
// if a TLS connection is actually the Trojan proxy protocol.
// The algorithm is from the following project, with small modifications:
// https://github.com/XTLS/Trojan-killer
// Warning: Experimental only. This method is known to have significant false positives and false negatives.
type TrojanAnalyzer struct{}
func (a *TrojanAnalyzer) Name() string {
return "trojan"
}
func (a *TrojanAnalyzer) Limit() int {
return 16384
}
func (a *TrojanAnalyzer) NewTCP(info analyzer.TCPInfo, logger analyzer.Logger) analyzer.TCPStream {
return newTrojanStream(logger)
}
type trojanStream struct {
logger analyzer.Logger
active bool
upCount int
downCount int
}
func newTrojanStream(logger analyzer.Logger) *trojanStream {
return &trojanStream{logger: logger}
}
func (s *trojanStream) Feed(rev, start, end bool, skip int, data []byte) (u *analyzer.PropUpdate, done bool) {
if skip != 0 {
return nil, true
}
if len(data) == 0 {
return nil, false
}
if !rev && !s.active && len(data) >= 6 && bytes.Equal(data[:6], trojanCCS) {
// Client CCS encountered, start counting
s.active = true
}
if s.active {
if rev {
// Down direction
s.downCount += len(data)
} else {
// Up direction
if s.upCount >= trojanUpLB && s.upCount <= trojanUpUB &&
((s.downCount >= trojanDownLB1 && s.downCount <= trojanDownUB1) ||
(s.downCount >= trojanDownLB2 && s.downCount <= trojanDownUB2)) {
return &analyzer.PropUpdate{
Type: analyzer.PropUpdateReplace,
M: analyzer.PropMap{
"up": s.upCount,
"down": s.downCount,
"yes": true,
},
}, true
}
s.upCount += len(data)
}
}
// Give up when either direction is over the limit
return nil, s.upCount > trojanUpUB || s.downCount > trojanDownUB2
}
func (s *trojanStream) Close(limited bool) *analyzer.PropUpdate {
return nil
}
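
The length windows used by the analyzer above reduce to a pure predicate over the two byte counters. The sketch below isolates that check (the function name `trojanSuspect` is illustrative; thresholds are the `trojanUpLB`/`trojanUpUB`/`trojanDownLB1`... constants from the file):

```go
package main

import "fmt"

// trojanSuspect applies the Trojan-killer heuristic: after the fake
// Change Cipher Spec, flag the connection when total client bytes fall
// in [650, 1000] and total server bytes fall in [170, 180] or [3000, 7500].
func trojanSuspect(up, down int) bool {
	inUp := up >= 650 && up <= 1000
	inDown := (down >= 170 && down <= 180) || (down >= 3000 && down <= 7500)
	return inUp && inDown
}

func main() {
	fmt.Println(trojanSuspect(700, 175)) // a typical Trojan-like exchange
	fmt.Println(trojanSuspect(700, 500)) // outside both down windows
}
```

As the file's own warning notes, this is purely statistical, which is why false positives and negatives are expected.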

analyzer/udp/wireguard.go Normal file

@@ -0,0 +1,217 @@
package udp
import (
"container/ring"
"encoding/binary"
"slices"
"sync"
"github.com/apernet/OpenGFW/analyzer"
)
var (
_ analyzer.UDPAnalyzer = (*WireGuardAnalyzer)(nil)
_ analyzer.UDPStream = (*wireGuardUDPStream)(nil)
)
const (
wireguardUDPInvalidCountThreshold = 4
wireguardRememberedIndexCount = 6
wireguardPropKeyMessageType = "message_type"
)
const (
	wireguardTypeHandshakeInitiation = 1
	wireguardTypeHandshakeResponse   = 2
	wireguardTypeCookieReply         = 3
	wireguardTypeData                = 4
)
const (
wireguardSizeHandshakeInitiation = 148
wireguardSizeHandshakeResponse = 92
wireguardMinSizePacketData = 32 // 16 bytes header + 16 bytes AEAD overhead
wireguardSizePacketCookieReply = 64
)
type WireGuardAnalyzer struct{}
func (a *WireGuardAnalyzer) Name() string {
return "wireguard"
}
func (a *WireGuardAnalyzer) Limit() int {
return 0
}
func (a *WireGuardAnalyzer) NewUDP(info analyzer.UDPInfo, logger analyzer.Logger) analyzer.UDPStream {
return newWireGuardUDPStream(logger)
}
type wireGuardUDPStream struct {
logger analyzer.Logger
invalidCount int
rememberedIndexes *ring.Ring
rememberedIndexesLock sync.RWMutex
}
func newWireGuardUDPStream(logger analyzer.Logger) *wireGuardUDPStream {
return &wireGuardUDPStream{
logger: logger,
rememberedIndexes: ring.New(wireguardRememberedIndexCount),
}
}
func (s *wireGuardUDPStream) Feed(rev bool, data []byte) (u *analyzer.PropUpdate, done bool) {
m := s.parseWireGuardPacket(rev, data)
if m == nil {
s.invalidCount++
return nil, s.invalidCount >= wireguardUDPInvalidCountThreshold
}
s.invalidCount = 0 // Reset invalid count on valid WireGuard packet
messageType := m[wireguardPropKeyMessageType].(byte)
propUpdateType := analyzer.PropUpdateMerge
if messageType == wireguardTypeHandshakeInitiation {
propUpdateType = analyzer.PropUpdateReplace
}
return &analyzer.PropUpdate{
Type: propUpdateType,
M: m,
}, false
}
func (s *wireGuardUDPStream) Close(limited bool) *analyzer.PropUpdate {
return nil
}
func (s *wireGuardUDPStream) parseWireGuardPacket(rev bool, data []byte) analyzer.PropMap {
if len(data) < 4 {
return nil
}
	// the 3 reserved bytes following the message type must all be zero
	if slices.Max(data[1:4]) != 0 {
		return nil
	}
messageType := data[0]
var propKey string
var propValue analyzer.PropMap
switch messageType {
case wireguardTypeHandshakeInitiation:
propKey = "handshake_initiation"
propValue = s.parseWireGuardHandshakeInitiation(rev, data)
case wireguardTypeHandshakeResponse:
propKey = "handshake_response"
propValue = s.parseWireGuardHandshakeResponse(rev, data)
case wireguardTypeData:
propKey = "packet_data"
propValue = s.parseWireGuardPacketData(rev, data)
case wireguardTypeCookieReply:
propKey = "packet_cookie_reply"
propValue = s.parseWireGuardPacketCookieReply(rev, data)
}
if propValue == nil {
return nil
}
m := make(analyzer.PropMap)
m[wireguardPropKeyMessageType] = messageType
m[propKey] = propValue
return m
}
func (s *wireGuardUDPStream) parseWireGuardHandshakeInitiation(rev bool, data []byte) analyzer.PropMap {
if len(data) != wireguardSizeHandshakeInitiation {
return nil
}
m := make(analyzer.PropMap)
senderIndex := binary.LittleEndian.Uint32(data[4:8])
m["sender_index"] = senderIndex
s.putSenderIndex(rev, senderIndex)
return m
}
func (s *wireGuardUDPStream) parseWireGuardHandshakeResponse(rev bool, data []byte) analyzer.PropMap {
if len(data) != wireguardSizeHandshakeResponse {
return nil
}
m := make(analyzer.PropMap)
senderIndex := binary.LittleEndian.Uint32(data[4:8])
m["sender_index"] = senderIndex
s.putSenderIndex(rev, senderIndex)
receiverIndex := binary.LittleEndian.Uint32(data[8:12])
m["receiver_index"] = receiverIndex
m["receiver_index_matched"] = s.matchReceiverIndex(rev, receiverIndex)
return m
}
func (s *wireGuardUDPStream) parseWireGuardPacketData(rev bool, data []byte) analyzer.PropMap {
if len(data) < wireguardMinSizePacketData {
return nil
}
if len(data)%16 != 0 {
		// WireGuard zero-pads the packet to make the length a multiple of 16
return nil
}
m := make(analyzer.PropMap)
receiverIndex := binary.LittleEndian.Uint32(data[4:8])
m["receiver_index"] = receiverIndex
m["receiver_index_matched"] = s.matchReceiverIndex(rev, receiverIndex)
m["counter"] = binary.LittleEndian.Uint64(data[8:16])
return m
}
func (s *wireGuardUDPStream) parseWireGuardPacketCookieReply(rev bool, data []byte) analyzer.PropMap {
if len(data) != wireguardSizePacketCookieReply {
return nil
}
m := make(analyzer.PropMap)
receiverIndex := binary.LittleEndian.Uint32(data[4:8])
m["receiver_index"] = receiverIndex
m["receiver_index_matched"] = s.matchReceiverIndex(rev, receiverIndex)
return m
}
type wireGuardIndex struct {
SenderIndex uint32
Reverse bool
}
func (s *wireGuardUDPStream) putSenderIndex(rev bool, senderIndex uint32) {
s.rememberedIndexesLock.Lock()
defer s.rememberedIndexesLock.Unlock()
s.rememberedIndexes.Value = &wireGuardIndex{
SenderIndex: senderIndex,
Reverse: rev,
}
s.rememberedIndexes = s.rememberedIndexes.Prev()
}
func (s *wireGuardUDPStream) matchReceiverIndex(rev bool, receiverIndex uint32) bool {
s.rememberedIndexesLock.RLock()
defer s.rememberedIndexesLock.RUnlock()
var found bool
ris := s.rememberedIndexes
for it := ris.Next(); it != ris; it = it.Next() {
if it.Value == nil {
break
}
wgidx := it.Value.(*wireGuardIndex)
if wgidx.Reverse == !rev && wgidx.SenderIndex == receiverIndex {
found = true
break
}
}
return found
}
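
The validation steps above (fixed message size, type byte, zeroed reserved bytes, little-endian index) can be condensed into a standalone sketch for the handshake initiation case. `wgHandshakeSender` is a hypothetical helper for illustration, not the analyzer's API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// wgHandshakeSender extracts sender_index from a WireGuard handshake
// initiation, mirroring the analyzer's checks: message type 1, three
// zero reserved bytes, and the fixed 148-byte packet length.
func wgHandshakeSender(data []byte) (uint32, bool) {
	if len(data) != 148 || data[0] != 1 {
		return 0, false
	}
	if data[1] != 0 || data[2] != 0 || data[3] != 0 {
		return 0, false // reserved bytes must be zero
	}
	// sender_index occupies bytes 4..8, little endian
	return binary.LittleEndian.Uint32(data[4:8]), true
}

func main() {
	pkt := make([]byte, 148)
	pkt[0] = 1
	binary.LittleEndian.PutUint32(pkt[4:8], 0xdeadbeef)
	idx, ok := wgHandshakeSender(pkt)
	fmt.Printf("%x %v\n", idx, ok)
}
```

The stream remembers these sender indexes so that a later `receiver_index` (in the response, data, or cookie-reply messages) can be matched against the opposite direction, which is what `receiver_index_matched` exposes to rules.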


@@ -44,6 +44,10 @@ func (lsm *LinearStateMachine) Run() (cancelled bool, done bool) {
return false, true
}
func (lsm *LinearStateMachine) AppendSteps(steps ...func() LSMAction) {
lsm.Steps = append(lsm.Steps, steps...)
}
func (lsm *LinearStateMachine) Reset() {
lsm.index = 0
lsm.cancelled = false
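
The `AppendSteps` addition above lets a parser extend its pipeline mid-stream, which is how the SOCKS analyzer adds version-specific steps after sniffing the first byte. A toy linear state machine showing the pause/next/cancel flow (illustrative only, not OpenGFW's `utils` implementation):

```go
package main

import "fmt"

type action int

const (
	next action = iota // step finished, move on
	pause              // need more data, resume here next time
	cancel             // give up on this stream
)

type lsm struct {
	steps []func() action
	index int
}

// run executes steps in order; pause preserves the current position for
// the next call, cancel aborts, and finishing every step reports done.
// Because len(steps) is re-checked each iteration, a step may append
// more steps while running, like AppendSteps.
func (m *lsm) run() (cancelled, done bool) {
	for m.index < len(m.steps) {
		switch m.steps[m.index]() {
		case pause:
			return false, false
		case cancel:
			return true, false
		case next:
			m.index++
		}
	}
	return false, true
}

func main() {
	calls := 0
	m := &lsm{}
	m.steps = append(m.steps, func() action {
		calls++
		if calls < 2 {
			return pause // pretend we need more data on the first call
		}
		// once satisfied, extend the pipeline like AppendSteps does
		m.steps = append(m.steps, func() action { return next })
		return next
	})
	_, done := m.run() // paused, waiting for more data
	fmt.Println(done)
	_, done = m.run() // completes both steps
	fmt.Println(done)
}
```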


@@ -7,6 +7,7 @@ import (
"os/signal"
"strconv"
"strings"
"syscall"
"github.com/apernet/OpenGFW/analyzer"
"github.com/apernet/OpenGFW/analyzer/tcp"
@@ -87,9 +88,12 @@ var logFormatMap = map[string]zapcore.EncoderConfig{
var analyzers = []analyzer.Analyzer{
&tcp.FETAnalyzer{},
&tcp.HTTPAnalyzer{},
&tcp.SocksAnalyzer{},
&tcp.SSHAnalyzer{},
&tcp.TLSAnalyzer{},
&tcp.TrojanAnalyzer{},
&udp.DNSAnalyzer{},
&udp.WireGuardAnalyzer{},
}
var modifiers = []modifier.Modifier{
@@ -159,6 +163,7 @@ func initLogger() {
type cliConfig struct {
IO cliConfigIO `mapstructure:"io"`
Workers cliConfigWorkers `mapstructure:"workers"`
Ruleset cliConfigRuleset `mapstructure:"ruleset"`
}
type cliConfigIO struct {
@@ -174,6 +179,11 @@ type cliConfigWorkers struct {
UDPMaxStreams int `mapstructure:"udpMaxStreams"`
}
type cliConfigRuleset struct {
GeoIp string `mapstructure:"geoip"`
GeoSite string `mapstructure:"geosite"`
}
func (c *cliConfig) fillLogger(config *engine.Config) error {
config.Logger = &engineLogger{}
return nil
@@ -242,7 +252,11 @@ func runMain(cmd *cobra.Command, args []string) {
if err != nil {
logger.Fatal("failed to load rules", zap.Error(err))
}
rs, err := ruleset.CompileExprRules(rawRs, analyzers, modifiers)
rsConfig := &ruleset.BuiltinConfig{
GeoSiteFilename: config.Ruleset.GeoSite,
GeoIpFilename: config.Ruleset.GeoIp,
}
rs, err := ruleset.CompileExprRules(rawRs, analyzers, modifiers, rsConfig)
if err != nil {
logger.Fatal("failed to compile rules", zap.Error(err))
}
@@ -254,14 +268,42 @@ func runMain(cmd *cobra.Command, args []string) {
logger.Fatal("failed to initialize engine", zap.Error(err))
}
// Signal handling
ctx, cancelFunc := context.WithCancel(context.Background())
go func() {
sigChan := make(chan os.Signal)
signal.Notify(sigChan, os.Interrupt, os.Kill)
<-sigChan
// Graceful shutdown
shutdownChan := make(chan os.Signal)
signal.Notify(shutdownChan, os.Interrupt, os.Kill)
<-shutdownChan
logger.Info("shutting down gracefully...")
cancelFunc()
}()
go func() {
// Rule reload
reloadChan := make(chan os.Signal)
signal.Notify(reloadChan, syscall.SIGHUP)
for {
<-reloadChan
logger.Info("reloading rules")
rawRs, err := ruleset.ExprRulesFromYAML(args[0])
if err != nil {
logger.Error("failed to load rules, using old rules", zap.Error(err))
continue
}
rs, err := ruleset.CompileExprRules(rawRs, analyzers, modifiers, rsConfig)
if err != nil {
logger.Error("failed to compile rules, using old rules", zap.Error(err))
continue
}
err = en.UpdateRuleset(rs)
if err != nil {
logger.Error("failed to update ruleset", zap.Error(err))
} else {
logger.Info("rules reloaded")
}
}
}()
logger.Info("engine started")
logger.Info("engine exited", zap.Error(en.Run(ctx)))
}

docs/Analyzers.md (new file, 422 lines)
View File

@@ -0,0 +1,422 @@
# Analyzers
Analyzers are one of the main components of OpenGFW. Their job is to analyze a connection, determine whether it uses a protocol they
support, and if so, extract information from that connection and provide properties for the rule engine to match against
user-provided rules. OpenGFW automatically detects which analyzers are referenced by the given rules and enables
only those that are needed.
This document lists the properties provided by each analyzer that can be used by rules.
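Conceptually, each analyzer turns a connection into a property map, and each rule is a boolean expression over those maps. The sketch below uses hypothetical, simplified types to show the idea; the real engine uses expr-lang expressions and richer interfaces, not these names.

```go
package main

import "fmt"

// PropMap mirrors the idea of an analyzer's output: a nested map of
// protocol properties extracted from a connection.
type PropMap map[string]interface{}

// Rule pairs a name and action with a predicate over the combined properties.
type Rule struct {
	Name   string
	Action string
	Match  func(props map[string]PropMap) bool
}

func main() {
	// What an SSH analyzer might emit for one connection.
	props := map[string]PropMap{
		"ssh": {"client": PropMap{"software": "OpenSSH_8.9p1"}},
	}
	// Rough equivalent of the rule `expr: ssh != nil` with action "block".
	blockSSH := Rule{
		Name:   "Block SSH",
		Action: "block",
		Match:  func(p map[string]PropMap) bool { _, ok := p["ssh"]; return ok },
	}
	if blockSSH.Match(props) {
		fmt.Println(blockSSH.Action) // block
	}
}
```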
## DNS (TCP & UDP)
For queries:
```json
{
"dns": {
"aa": false,
"id": 41953,
"opcode": 0,
"qr": false,
"questions": [
{
"class": 1,
"name": "www.google.com",
"type": 1
}
],
"ra": false,
"rcode": 0,
"rd": true,
"tc": false,
"z": 0
}
}
```
For responses:
```json
{
"dns": {
"aa": false,
"answers": [
{
"a": "142.251.32.36",
"class": 1,
"name": "www.google.com",
"ttl": 255,
"type": 1
}
],
"id": 41953,
"opcode": 0,
"qr": true,
"questions": [
{
"class": 1,
"name": "www.google.com",
"type": 1
}
],
"ra": true,
"rcode": 0,
"rd": true,
"tc": false,
"z": 0
}
}
```
Example for blocking DNS queries for `www.google.com`:
```yaml
- name: Block Google DNS
action: drop
expr: dns != nil && !dns.qr && any(dns.questions, {.name == "www.google.com"})
```
## FET (Fully Encrypted Traffic)
Check https://www.usenix.org/system/files/usenixsecurity23-wu-mingshi.pdf for more information.
```json
{
"fet": {
"ex1": 3.7560976,
"ex2": true,
"ex3": 0.9512195,
"ex4": 39,
"ex5": false,
"yes": false
}
}
```
Example for blocking fully encrypted traffic:
```yaml
- name: Block suspicious proxy traffic
action: block
expr: fet != nil && fet.yes
```
## HTTP
```json
{
"http": {
"req": {
"headers": {
"accept": "*/*",
"host": "ipinfo.io",
"user-agent": "curl/7.81.0"
},
"method": "GET",
"path": "/",
"version": "HTTP/1.1"
},
"resp": {
"headers": {
"access-control-allow-origin": "*",
"content-length": "333",
"content-type": "application/json; charset=utf-8",
"date": "Wed, 24 Jan 2024 05:41:44 GMT",
"referrer-policy": "strict-origin-when-cross-origin",
"server": "nginx/1.24.0",
"strict-transport-security": "max-age=2592000; includeSubDomains",
"via": "1.1 google",
"x-content-type-options": "nosniff",
"x-envoy-upstream-service-time": "2",
"x-frame-options": "SAMEORIGIN",
"x-xss-protection": "1; mode=block"
},
"status": 200,
"version": "HTTP/1.1"
}
}
}
```
Example for blocking HTTP requests to `ipinfo.io`:
```yaml
- name: Block ipinfo.io HTTP
action: block
expr: http != nil && http.req != nil && http.req.headers != nil && http.req.headers.host == "ipinfo.io"
```
## SSH
```json
{
"ssh": {
"server": {
"comments": "Ubuntu-3ubuntu0.6",
"protocol": "2.0",
"software": "OpenSSH_8.9p1"
},
"client": {
"comments": "IMHACKER",
"protocol": "2.0",
"software": "OpenSSH_8.9p1"
}
}
}
```
Example for blocking all SSH connections:
```yaml
- name: Block SSH
action: block
expr: ssh != nil
```
## TLS
```json
{
"tls": {
"req": {
"alpn": [
"h2",
"http/1.1"
],
"ciphers": [
4866,
4867,
4865,
49196,
49200,
159,
52393,
52392,
52394,
49195,
49199,
158,
49188,
49192,
107,
49187,
49191,
103,
49162,
49172,
57,
49161,
49171,
51,
157,
156,
61,
60,
53,
47,
255
],
"compression": "AA==",
"random": "UqfPi+EmtMgusILrKcELvVWwpOdPSM/My09nPXl84dg=",
"session": "jCTrpAzHpwrfuYdYx4FEjZwbcQxCuZ52HGIoOcbw1vA=",
"sni": "ipinfo.io",
"supported_versions": [
772,
771
],
"version": 771,
"ech": true
},
"resp": {
"cipher": 4866,
"compression": 0,
"random": "R/Cy1m9pktuBMZQIHahD8Y83UWPRf8j8luwNQep9yJI=",
"session": "jCTrpAzHpwrfuYdYx4FEjZwbcQxCuZ52HGIoOcbw1vA=",
"supported_versions": 772,
"version": 771
}
}
}
```
Example for blocking TLS connections to `ipinfo.io`:
```yaml
- name: Block ipinfo.io TLS
action: block
expr: tls != nil && tls.req != nil && tls.req.sni == "ipinfo.io"
```
## Trojan (proxy protocol)
Check https://github.com/XTLS/Trojan-killer for more information.
```json
{
"trojan": {
"down": 4712,
"up": 671,
"yes": true
}
}
```
Example for blocking Trojan connections:
```yaml
- name: Block Trojan
action: block
expr: trojan != nil && trojan.yes
```
## SOCKS
SOCKS4:
```json5
{
"socks": {
"version": 4,
"req": {
"cmd": 1,
"addr_type": 1, // same as socks5
"addr": "1.1.1.1",
// for socks4a
// "addr_type": 3,
// "addr": "google.com",
"port": 443,
"auth": {
"user_id": "user"
}
},
"resp": {
"rep": 90, // 0x5A(90) granted
"addr_type": 1,
"addr": "1.1.1.1",
"port": 443
}
}
}
```
SOCKS5 without auth:
```json5
{
"socks": {
"version": 5,
"req": {
"cmd": 1, // 0x01: connect, 0x02: bind, 0x03: udp
"addr_type": 3, // 0x01: ipv4, 0x03: domain, 0x04: ipv6
"addr": "google.com",
"port": 80,
"auth": {
"method": 0 // 0x00: no auth, 0x02: username/password
}
},
"resp": {
"rep": 0, // 0x00: success
"addr_type": 1, // 0x01: ipv4, 0x03: domain, 0x04: ipv6
"addr": "198.18.1.31",
"port": 80,
"auth": {
"method": 0 // 0x00: no auth, 0x02: username/password
}
}
}
}
```
SOCKS5 with auth:
```json5
{
"socks": {
"version": 5,
"req": {
"cmd": 1, // 0x01: connect, 0x02: bind, 0x03: udp
"addr_type": 3, // 0x01: ipv4, 0x03: domain, 0x04: ipv6
"addr": "google.com",
"port": 80,
"auth": {
"method": 2, // 0x00: no auth, 0x02: username/password
"username": "user",
"password": "pass"
}
},
"resp": {
"rep": 0, // 0x00: success
"addr_type": 1, // 0x01: ipv4, 0x03: domain, 0x04: ipv6
"addr": "198.18.1.31",
"port": 80,
"auth": {
"method": 2, // 0x00: no auth, 0x02: username/password
"status": 0 // 0x00: success, 0x01: failure
}
}
}
}
```
Example for blocking connections to `google.com:80`, as well as SOCKS connections authenticated as user `foobar`:
```yaml
- name: Block SOCKS google.com:80
action: block
expr: string(socks?.req?.addr) endsWith "google.com" && socks?.req?.port == 80
- name: Block SOCKS user foobar
action: block
expr: socks?.req?.auth?.method == 2 && socks?.req?.auth?.username == "foobar"
```
## WireGuard
```json5
{
"wireguard": {
"message_type": 1, // 0x1: handshake_initiation, 0x2: handshake_response, 0x3: packet_cookie_reply, 0x4: packet_data
"handshake_initiation": {
"sender_index": 0x12345678
},
"handshake_response": {
"sender_index": 0x12345678,
"receiver_index": 0x87654321,
"receiver_index_matched": true
},
"packet_data": {
"receiver_index": 0x12345678,
"receiver_index_matched": true
},
"packet_cookie_reply": {
"receiver_index": 0x12345678,
"receiver_index_matched": true
}
}
}
```
Example for blocking WireGuard traffic:
```yaml
# false positive: high
- name: Block all WireGuard-like traffic
action: block
expr: wireguard != nil
# false positive: medium
- name: Block WireGuard by handshake_initiation
action: drop
expr: wireguard?.handshake_initiation != nil
# false positive: low
- name: Block WireGuard by handshake_response
action: drop
expr: wireguard?.handshake_response?.receiver_index_matched == true
# false positive: pretty low
- name: Block WireGuard by packet_data
action: block
expr: wireguard?.packet_data?.receiver_index_matched == true
```

Binary file (image) changed, 44 KiB → 43 KiB; not shown.

View File

@@ -65,7 +65,7 @@ func (f *tcpStreamFactory) New(ipFlow, tcpFlow gopacket.Flow, tcp *layers.TCP, a
ctx.Verdict = tcpVerdictAcceptStream
f.Logger.TCPStreamAction(info, ruleset.ActionAllow, true)
// a tcpStream with no activeEntries is a no-op
return &tcpStream{}
return &tcpStream{finalVerdict: tcpVerdictAcceptStream}
}
// Create entries for each analyzer
entries := make([]*tcpStreamEntry, 0, len(ans))
@@ -109,6 +109,7 @@ type tcpStream struct {
ruleset ruleset.Ruleset
activeEntries []*tcpStreamEntry
doneEntries []*tcpStreamEntry
finalVerdict tcpVerdict
}
type tcpStreamEntry struct {
@@ -119,8 +120,13 @@ type tcpStreamEntry struct {
}
func (s *tcpStream) Accept(tcp *layers.TCP, ci gopacket.CaptureInfo, dir reassembly.TCPFlowDirection, nextSeq reassembly.Sequence, start *bool, ac reassembly.AssemblerContext) bool {
// Only accept packets if we still have active entries
return len(s.activeEntries) > 0
if len(s.activeEntries) > 0 {
return true
} else {
ctx := ac.(*tcpContext)
ctx.Verdict = s.finalVerdict
return false
}
}
func (s *tcpStream) ReassembledSG(sg reassembly.ScatterGather, ac reassembly.AssemblerContext) {
@@ -133,8 +139,9 @@ func (s *tcpStream) ReassembledSG(sg reassembly.ScatterGather, ac reassembly.Ass
// Important: reverse order so we can remove entries
entry := s.activeEntries[i]
update, closeUpdate, done := s.feedEntry(entry, rev, start, end, skip, data)
updated = updated || processPropUpdate(s.info.Props, entry.Name, update)
updated = updated || processPropUpdate(s.info.Props, entry.Name, closeUpdate)
up1 := processPropUpdate(s.info.Props, entry.Name, update)
up2 := processPropUpdate(s.info.Props, entry.Name, closeUpdate)
updated = updated || up1 || up2
if done {
s.activeEntries = append(s.activeEntries[:i], s.activeEntries[i+1:]...)
s.doneEntries = append(s.doneEntries, entry)
@@ -151,7 +158,9 @@ func (s *tcpStream) ReassembledSG(sg reassembly.ScatterGather, ac reassembly.Ass
}
action := result.Action
if action != ruleset.ActionMaybe && action != ruleset.ActionModify {
ctx.Verdict = actionToTCPVerdict(action)
verdict := actionToTCPVerdict(action)
s.finalVerdict = verdict
ctx.Verdict = verdict
s.logger.TCPStreamAction(s.info, action, false)
// Verdict issued, no need to process any more packets
s.closeActiveEntries()
@@ -159,6 +168,7 @@ func (s *tcpStream) ReassembledSG(sg reassembly.ScatterGather, ac reassembly.Ass
}
if len(s.activeEntries) == 0 && ctx.Verdict == tcpVerdictAccept {
// All entries are done but no verdict issued, accept stream
s.finalVerdict = tcpVerdictAcceptStream
ctx.Verdict = tcpVerdictAcceptStream
s.logger.TCPStreamAction(s.info, ruleset.ActionAllow, true)
}
@@ -174,7 +184,8 @@ func (s *tcpStream) closeActiveEntries() {
updated := false
for _, entry := range s.activeEntries {
update := entry.Stream.Close(false)
updated = updated || processPropUpdate(s.info.Props, entry.Name, update)
up := processPropUpdate(s.info.Props, entry.Name, update)
updated = updated || up
}
if updated {
s.logger.TCPStreamPropUpdate(s.info, true)

View File

@@ -65,7 +65,7 @@ func (f *udpStreamFactory) New(ipFlow, udpFlow gopacket.Flow, udp *layers.UDP, u
uc.Verdict = udpVerdictAcceptStream
f.Logger.UDPStreamAction(info, ruleset.ActionAllow, true)
// a udpStream with no activeEntries is a no-op
return &udpStream{}
return &udpStream{finalVerdict: udpVerdictAcceptStream}
}
// Create entries for each analyzer
entries := make([]*udpStreamEntry, 0, len(ans))
@@ -167,6 +167,7 @@ type udpStream struct {
ruleset ruleset.Ruleset
activeEntries []*udpStreamEntry
doneEntries []*udpStreamEntry
finalVerdict udpVerdict
}
type udpStreamEntry struct {
@@ -177,8 +178,12 @@ type udpStreamEntry struct {
}
func (s *udpStream) Accept(udp *layers.UDP, rev bool, uc *udpContext) bool {
// Only accept packets if we still have active entries
return len(s.activeEntries) > 0
if len(s.activeEntries) > 0 {
return true
} else {
uc.Verdict = s.finalVerdict
return false
}
}
func (s *udpStream) Feed(udp *layers.UDP, rev bool, uc *udpContext) {
@@ -187,8 +192,9 @@ func (s *udpStream) Feed(udp *layers.UDP, rev bool, uc *udpContext) {
// Important: reverse order so we can remove entries
entry := s.activeEntries[i]
update, closeUpdate, done := s.feedEntry(entry, rev, udp.Payload)
updated = updated || processPropUpdate(s.info.Props, entry.Name, update)
updated = updated || processPropUpdate(s.info.Props, entry.Name, closeUpdate)
up1 := processPropUpdate(s.info.Props, entry.Name, update)
up2 := processPropUpdate(s.info.Props, entry.Name, closeUpdate)
updated = updated || up1 || up2
if done {
s.activeEntries = append(s.activeEntries[:i], s.activeEntries[i+1:]...)
s.doneEntries = append(s.doneEntries, entry)
@@ -220,16 +226,18 @@ func (s *udpStream) Feed(udp *layers.UDP, rev bool, uc *udpContext) {
}
}
if action != ruleset.ActionMaybe {
var final bool
uc.Verdict, final = actionToUDPVerdict(action)
verdict, final := actionToUDPVerdict(action)
uc.Verdict = verdict
s.logger.UDPStreamAction(s.info, action, false)
if final {
s.finalVerdict = verdict
s.closeActiveEntries()
}
}
}
if len(s.activeEntries) == 0 && uc.Verdict == udpVerdictAccept {
// All entries are done but no verdict issued, accept stream
s.finalVerdict = udpVerdictAcceptStream
uc.Verdict = udpVerdictAcceptStream
s.logger.UDPStreamAction(s.info, ruleset.ActionAllow, true)
}
@@ -244,7 +252,8 @@ func (s *udpStream) closeActiveEntries() {
updated := false
for _, entry := range s.activeEntries {
update := entry.Stream.Close(false)
updated = updated || processPropUpdate(s.info.Props, entry.Name, update)
up := processPropUpdate(s.info.Props, entry.Name, update)
updated = updated || up
}
if updated {
s.logger.UDPStreamPropUpdate(s.info, true)

View File

@@ -30,7 +30,7 @@ func processPropUpdate(cpm analyzer.CombinedPropMap, name string, update *analyz
case analyzer.PropUpdateMerge:
m := cpm[name]
if m == nil {
m = make(analyzer.PropMap)
m = make(analyzer.PropMap, len(update.M))
cpm[name] = m
}
for k, v := range update.M {

View File

@@ -127,7 +127,10 @@ func (w *worker) Run(ctx context.Context) {
}
func (w *worker) UpdateRuleset(r ruleset.Ruleset) error {
return w.tcpStreamFactory.UpdateRuleset(r)
if err := w.tcpStreamFactory.UpdateRuleset(r); err != nil {
return err
}
return w.udpStreamFactory.UpdateRuleset(r)
}
func (w *worker) handle(streamID uint32, p gopacket.Packet) (io.Verdict, []byte) {

go.mod (4 lines changed)
View File

@@ -12,11 +12,14 @@ require (
github.com/mdlayher/netlink v1.6.0
github.com/spf13/cobra v1.8.0
github.com/spf13/viper v1.18.2
github.com/stretchr/testify v1.8.4
go.uber.org/zap v1.26.0
google.golang.org/protobuf v1.31.0
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
@@ -26,6 +29,7 @@ require (
github.com/mdlayher/socket v0.1.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/pelletier/go-toml/v2 v2.1.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect

go.sum (7 lines changed)
View File

@@ -6,6 +6,7 @@ github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46t
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/expr-lang/expr v1.15.7 h1:BK0JcWUkoW6nrbLBo6xCKhz4BvH5DSOOu1Gx5lucyZo=
github.com/expr-lang/expr v1.15.7/go.mod h1:uCkhfG+x7fcZ5A5sXHKuQ07jGZRl6J0FCAaf2k4PtVQ=
github.com/florianl/go-nfqueue v1.3.2-0.20231218173729-f2bdeb033acf h1:NqGS3vTHzVENbIfd87cXZwdpO6MB2R1PjHMJLi4Z3ow=
@@ -13,6 +14,8 @@ github.com/florianl/go-nfqueue v1.3.2-0.20231218173729-f2bdeb033acf/go.mod h1:eS
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
@@ -41,6 +44,7 @@ github.com/pelletier/go-toml/v2 v2.1.0 h1:FnwAJ4oYMvbT/34k9zzHuZNrhlz48GB3/s6at6
github.com/pelletier/go-toml/v2 v2.1.0/go.mod h1:tJU2Z3ZkXwnxa4DPO899bsyIoywizdUvyaeZurnPPDc=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
@@ -111,6 +115,9 @@ golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapK
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=

View File

@@ -4,7 +4,10 @@ import (
"context"
"encoding/binary"
"errors"
"fmt"
"os/exec"
"strconv"
"strings"
"github.com/coreos/go-iptables/iptables"
"github.com/florianl/go-nfqueue"
@@ -18,23 +21,62 @@ const (
nfqueueConnMarkAccept = 1001
nfqueueConnMarkDrop = 1002
nftFamily = "inet"
nftTable = "opengfw"
)
var nftRulesForward = fmt.Sprintf(`
define ACCEPT_CTMARK=%d
define DROP_CTMARK=%d
define QUEUE_NUM=%d
table %s %s {
chain FORWARD {
type filter hook forward priority filter; policy accept;
ct mark $ACCEPT_CTMARK counter accept
ct mark $DROP_CTMARK counter drop
counter queue num $QUEUE_NUM bypass
}
}
`, nfqueueConnMarkAccept, nfqueueConnMarkDrop, nfqueueNum, nftFamily, nftTable)
var nftRulesLocal = fmt.Sprintf(`
define ACCEPT_CTMARK=%d
define DROP_CTMARK=%d
define QUEUE_NUM=%d
table %s %s {
chain INPUT {
type filter hook input priority filter; policy accept;
ct mark $ACCEPT_CTMARK counter accept
ct mark $DROP_CTMARK counter drop
counter queue num $QUEUE_NUM bypass
}
chain OUTPUT {
type filter hook output priority filter; policy accept;
ct mark $ACCEPT_CTMARK counter accept
ct mark $DROP_CTMARK counter drop
counter queue num $QUEUE_NUM bypass
}
}
`, nfqueueConnMarkAccept, nfqueueConnMarkDrop, nfqueueNum, nftFamily, nftTable)
var iptRulesForward = []iptRule{
{"filter", "FORWARD", []string{"-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkAccept), "-j", "ACCEPT"}},
{"filter", "FORWARD", []string{"-p", "tcp", "-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkDrop), "-j", "REJECT", "--reject-with", "tcp-reset"}},
{"filter", "FORWARD", []string{"-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkDrop), "-j", "DROP"}},
{"filter", "FORWARD", []string{"-j", "NFQUEUE", "--queue-num", strconv.Itoa(nfqueueNum), "--queue-bypass"}},
}
var iptRulesLocal = []iptRule{
{"filter", "INPUT", []string{"-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkAccept), "-j", "ACCEPT"}},
{"filter", "INPUT", []string{"-p", "tcp", "-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkDrop), "-j", "REJECT", "--reject-with", "tcp-reset"}},
{"filter", "INPUT", []string{"-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkDrop), "-j", "DROP"}},
{"filter", "INPUT", []string{"-j", "NFQUEUE", "--queue-num", strconv.Itoa(nfqueueNum), "--queue-bypass"}},
{"filter", "OUTPUT", []string{"-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkAccept), "-j", "ACCEPT"}},
{"filter", "OUTPUT", []string{"-p", "tcp", "-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkDrop), "-j", "REJECT", "--reject-with", "tcp-reset"}},
{"filter", "OUTPUT", []string{"-m", "connmark", "--mark", strconv.Itoa(nfqueueConnMarkDrop), "-j", "DROP"}},
{"filter", "OUTPUT", []string{"-j", "NFQUEUE", "--queue-num", strconv.Itoa(nfqueueNum), "--queue-bypass"}},
}
@@ -46,8 +88,11 @@ var errNotNFQueuePacket = errors.New("not an NFQueue packet")
type nfqueuePacketIO struct {
n *nfqueue.Nfqueue
local bool
ipt4 *iptables.IPTables
ipt6 *iptables.IPTables
rSet bool // whether the nftables/iptables rules have been set
// iptables not nil = use iptables instead of nftables
ipt4 *iptables.IPTables
ipt6 *iptables.IPTables
}
type NFQueuePacketIOConfig struct {
@@ -59,13 +104,18 @@ func NewNFQueuePacketIO(config NFQueuePacketIOConfig) (PacketIO, error) {
if config.QueueSize == 0 {
config.QueueSize = nfqueueDefaultQueueSize
}
ipt4, err := iptables.NewWithProtocol(iptables.ProtocolIPv4)
if err != nil {
return nil, err
}
ipt6, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
if err != nil {
return nil, err
var ipt4, ipt6 *iptables.IPTables
var err error
if nftCheck() != nil {
// We prefer nftables, but if it's not available, fall back to iptables
ipt4, err = iptables.NewWithProtocol(iptables.ProtocolIPv4)
if err != nil {
return nil, err
}
ipt6, err = iptables.NewWithProtocol(iptables.ProtocolIPv6)
if err != nil {
return nil, err
}
}
n, err := nfqueue.Open(&nfqueue.Config{
NfQueue: nfqueueNum,
@@ -77,22 +127,16 @@ func NewNFQueuePacketIO(config NFQueuePacketIOConfig) (PacketIO, error) {
if err != nil {
return nil, err
}
io := &nfqueuePacketIO{
return &nfqueuePacketIO{
n: n,
local: config.Local,
ipt4: ipt4,
ipt6: ipt6,
}
err = io.setupIpt(config.Local, false)
if err != nil {
_ = n.Close()
return nil, err
}
return io, nil
}, nil
}
func (n *nfqueuePacketIO) Register(ctx context.Context, cb PacketCallback) error {
return n.n.RegisterWithErrorFunc(ctx,
err := n.n.RegisterWithErrorFunc(ctx,
func(a nfqueue.Attribute) int {
if a.PacketID == nil || a.Ct == nil || a.Payload == nil || len(*a.Payload) < 20 {
// Invalid packet, ignore
@@ -109,6 +153,21 @@ func (n *nfqueuePacketIO) Register(ctx context.Context, cb PacketCallback) error
func(e error) int {
return okBoolToInt(cb(nil, e))
})
if err != nil {
return err
}
if !n.rSet {
if n.ipt4 != nil {
err = n.setupIpt(n.local, false)
} else {
err = n.setupNft(n.local, false)
}
if err != nil {
return err
}
n.rSet = true
}
return nil
}
func (n *nfqueuePacketIO) SetVerdict(p Packet, v Verdict, newPacket []byte) error {
@@ -133,6 +192,39 @@ func (n *nfqueuePacketIO) SetVerdict(p Packet, v Verdict, newPacket []byte) erro
}
}
func (n *nfqueuePacketIO) Close() error {
if n.rSet {
if n.ipt4 != nil {
_ = n.setupIpt(n.local, true)
} else {
_ = n.setupNft(n.local, true)
}
n.rSet = false
}
return n.n.Close()
}
func (n *nfqueuePacketIO) setupNft(local, remove bool) error {
var rules string
if local {
rules = nftRulesLocal
} else {
rules = nftRulesForward
}
var err error
if remove {
err = nftDelete(nftFamily, nftTable)
} else {
// Delete first to make sure no leftover rules
_ = nftDelete(nftFamily, nftTable)
err = nftAdd(rules)
}
if err != nil {
return err
}
return nil
}
func (n *nfqueuePacketIO) setupIpt(local, remove bool) error {
var rules []iptRule
if local {
@@ -152,12 +244,6 @@ func (n *nfqueuePacketIO) setupIpt(local, remove bool) error {
return nil
}
func (n *nfqueuePacketIO) Close() error {
err := n.setupIpt(n.local, true)
_ = n.n.Close()
return err
}
var _ Packet = (*nfqueuePacket)(nil)
type nfqueuePacket struct {
@@ -182,6 +268,25 @@ func okBoolToInt(ok bool) int {
}
}
func nftCheck() error {
_, err := exec.LookPath("nft")
if err != nil {
return err
}
return nil
}
func nftAdd(input string) error {
cmd := exec.Command("nft", "-f", "-")
cmd.Stdin = strings.NewReader(input)
return cmd.Run()
}
func nftDelete(family, table string) error {
cmd := exec.Command("nft", "delete", "table", family, table)
return cmd.Run()
}
type iptRule struct {
Table, Chain string
RuleSpec []string

View File

@@ -0,0 +1,128 @@
package geo
import (
"io"
"net/http"
"os"
"time"
"github.com/apernet/OpenGFW/ruleset/builtins/geo/v2geo"
)
const (
geoipFilename = "geoip.dat"
geoipURL = "https://cdn.jsdelivr.net/gh/Loyalsoldier/v2ray-rules-dat@release/geoip.dat"
geositeFilename = "geosite.dat"
geositeURL = "https://cdn.jsdelivr.net/gh/Loyalsoldier/v2ray-rules-dat@release/geosite.dat"
geoDefaultUpdateInterval = 7 * 24 * time.Hour // 7 days
)
var _ GeoLoader = (*V2GeoLoader)(nil)
// V2GeoLoader provides the on-demand GeoIP/GeoSite database
// loading functionality required by the rule engine.
// Empty filenames = automatic download from built-in URLs.
type V2GeoLoader struct {
GeoIPFilename string
GeoSiteFilename string
UpdateInterval time.Duration
DownloadFunc func(filename, url string)
DownloadErrFunc func(err error)
geoipMap map[string]*v2geo.GeoIP
geositeMap map[string]*v2geo.GeoSite
}
func NewDefaultGeoLoader(geoSiteFilename, geoIpFilename string) *V2GeoLoader {
return &V2GeoLoader{
GeoIPFilename: geoIpFilename,
GeoSiteFilename: geoSiteFilename,
DownloadFunc: func(filename, url string) {},
DownloadErrFunc: func(err error) {},
}
}
func (l *V2GeoLoader) shouldDownload(filename string) bool {
info, err := os.Stat(filename)
if os.IsNotExist(err) {
return true
}
dt := time.Now().Sub(info.ModTime())
if l.UpdateInterval == 0 {
return dt > geoDefaultUpdateInterval
} else {
return dt > l.UpdateInterval
}
}
func (l *V2GeoLoader) download(filename, url string) error {
l.DownloadFunc(filename, url)
resp, err := http.Get(url)
if err != nil {
l.DownloadErrFunc(err)
return err
}
defer resp.Body.Close()
f, err := os.Create(filename)
if err != nil {
l.DownloadErrFunc(err)
return err
}
defer f.Close()
_, err = io.Copy(f, resp.Body)
l.DownloadErrFunc(err)
return err
}
func (l *V2GeoLoader) LoadGeoIP() (map[string]*v2geo.GeoIP, error) {
if l.geoipMap != nil {
return l.geoipMap, nil
}
autoDL := false
filename := l.GeoIPFilename
if filename == "" {
autoDL = true
filename = geoipFilename
}
if autoDL && l.shouldDownload(filename) {
err := l.download(filename, geoipURL)
if err != nil {
return nil, err
}
}
m, err := v2geo.LoadGeoIP(filename)
if err != nil {
return nil, err
}
l.geoipMap = m
return m, nil
}
func (l *V2GeoLoader) LoadGeoSite() (map[string]*v2geo.GeoSite, error) {
if l.geositeMap != nil {
return l.geositeMap, nil
}
autoDL := false
filename := l.GeoSiteFilename
if filename == "" {
autoDL = true
filename = geositeFilename
}
if autoDL && l.shouldDownload(filename) {
err := l.download(filename, geositeURL)
if err != nil {
return nil, err
}
}
m, err := v2geo.LoadGeoSite(filename)
if err != nil {
return nil, err
}
l.geositeMap = m
return m, nil
}

View File

@@ -0,0 +1,115 @@
package geo
import (
"net"
"strings"
"sync"
)
type GeoMatcher struct {
geoLoader GeoLoader
geoSiteMatcher map[string]hostMatcher
siteMatcherLock sync.Mutex
geoIpMatcher map[string]hostMatcher
ipMatcherLock sync.Mutex
}
func NewGeoMatcher(geoSiteFilename, geoIpFilename string) (*GeoMatcher, error) {
geoLoader := NewDefaultGeoLoader(geoSiteFilename, geoIpFilename)
return &GeoMatcher{
geoLoader: geoLoader,
geoSiteMatcher: make(map[string]hostMatcher),
geoIpMatcher: make(map[string]hostMatcher),
}, nil
}
func (g *GeoMatcher) MatchGeoIp(ip, condition string) bool {
g.ipMatcherLock.Lock()
defer g.ipMatcherLock.Unlock()
matcher, ok := g.geoIpMatcher[condition]
if !ok {
// GeoIP matcher
condition = strings.ToLower(condition)
country := condition
if len(country) == 0 {
return false
}
gMap, err := g.geoLoader.LoadGeoIP()
if err != nil {
return false
}
list, ok := gMap[country]
if !ok || list == nil {
return false
}
matcher, err = newGeoIPMatcher(list)
if err != nil {
return false
}
g.geoIpMatcher[condition] = matcher
}
parseIp := net.ParseIP(ip)
if parseIp == nil {
return false
}
ipv4 := parseIp.To4()
if ipv4 != nil {
return matcher.Match(HostInfo{IPv4: ipv4})
}
ipv6 := parseIp.To16()
if ipv6 != nil {
return matcher.Match(HostInfo{IPv6: ipv6})
}
return false
}
func (g *GeoMatcher) MatchGeoSite(site, condition string) bool {
g.siteMatcherLock.Lock()
defer g.siteMatcherLock.Unlock()
matcher, ok := g.geoSiteMatcher[condition]
if !ok {
// GeoSite matcher
condition = strings.ToLower(condition)
name, attrs := parseGeoSiteName(condition)
if len(name) == 0 {
return false
}
gMap, err := g.geoLoader.LoadGeoSite()
if err != nil {
return false
}
list, ok := gMap[name]
if !ok || list == nil {
return false
}
matcher, err = newGeositeMatcher(list, attrs)
if err != nil {
return false
}
g.geoSiteMatcher[condition] = matcher
}
return matcher.Match(HostInfo{Name: site})
}
func (g *GeoMatcher) LoadGeoSite() error {
_, err := g.geoLoader.LoadGeoSite()
return err
}
func (g *GeoMatcher) LoadGeoIP() error {
_, err := g.geoLoader.LoadGeoIP()
return err
}
func parseGeoSiteName(s string) (string, []string) {
parts := strings.Split(s, "@")
base := strings.TrimSpace(parts[0])
attrs := parts[1:]
for i := range attrs {
attrs[i] = strings.TrimSpace(attrs[i])
}
return base, attrs
}
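Conditions like `google@ads@cn` split into a base list name plus attribute filters. Here is the parsing logic reproduced standalone with a usage example:

```go
package main

import (
	"fmt"
	"strings"
)

// parseGeoSiteName splits "name@attr1@attr2" into the list name and its attrs,
// trimming whitespace around each part.
func parseGeoSiteName(s string) (string, []string) {
	parts := strings.Split(s, "@")
	base := strings.TrimSpace(parts[0])
	attrs := parts[1:]
	for i := range attrs {
		attrs[i] = strings.TrimSpace(attrs[i])
	}
	return base, attrs
}

func main() {
	name, attrs := parseGeoSiteName("google @ ads")
	fmt.Println(name, attrs) // google [ads]
}
```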

View File

@@ -0,0 +1,27 @@
package geo
import (
"fmt"
"net"
"github.com/apernet/OpenGFW/ruleset/builtins/geo/v2geo"
)
type HostInfo struct {
Name string
IPv4 net.IP
IPv6 net.IP
}
func (h HostInfo) String() string {
return fmt.Sprintf("%s|%s|%s", h.Name, h.IPv4, h.IPv6)
}
type GeoLoader interface {
LoadGeoIP() (map[string]*v2geo.GeoIP, error)
LoadGeoSite() (map[string]*v2geo.GeoSite, error)
}
type hostMatcher interface {
Match(HostInfo) bool
}

View File

@@ -0,0 +1,213 @@
package geo
import (
"bytes"
"errors"
"net"
"regexp"
"sort"
"strings"
"github.com/apernet/OpenGFW/ruleset/builtins/geo/v2geo"
)
var _ hostMatcher = (*geoipMatcher)(nil)
type geoipMatcher struct {
N4 []*net.IPNet // sorted
N6 []*net.IPNet // sorted
Inverse bool
}
// matchIP tries to match the given IP address with the corresponding IPNets.
// Note that this function does NOT handle the Inverse flag.
func (m *geoipMatcher) matchIP(ip net.IP) bool {
var n []*net.IPNet
if ip4 := ip.To4(); ip4 != nil {
// N4 stores IPv4 addresses in 4-byte form.
// Make sure we use it here too, otherwise bytes.Compare will fail.
ip = ip4
n = m.N4
} else {
n = m.N6
}
left, right := 0, len(n)-1
for left <= right {
mid := (left + right) / 2
if n[mid].Contains(ip) {
return true
} else if bytes.Compare(n[mid].IP, ip) < 0 {
left = mid + 1
} else {
right = mid - 1
}
}
return false
}
func (m *geoipMatcher) Match(host HostInfo) bool {
if host.IPv4 != nil {
if m.matchIP(host.IPv4) {
return !m.Inverse
}
}
if host.IPv6 != nil {
if m.matchIP(host.IPv6) {
return !m.Inverse
}
}
return m.Inverse
}
func newGeoIPMatcher(list *v2geo.GeoIP) (*geoipMatcher, error) {
n4 := make([]*net.IPNet, 0)
n6 := make([]*net.IPNet, 0)
for _, cidr := range list.Cidr {
if len(cidr.Ip) == 4 {
// IPv4
n4 = append(n4, &net.IPNet{
IP: cidr.Ip,
Mask: net.CIDRMask(int(cidr.Prefix), 32),
})
} else if len(cidr.Ip) == 16 {
// IPv6
n6 = append(n6, &net.IPNet{
IP: cidr.Ip,
Mask: net.CIDRMask(int(cidr.Prefix), 128),
})
} else {
return nil, errors.New("invalid IP length")
}
}
// Sort the IPNets, so we can do binary search later.
sort.Slice(n4, func(i, j int) bool {
return bytes.Compare(n4[i].IP, n4[j].IP) < 0
})
sort.Slice(n6, func(i, j int) bool {
return bytes.Compare(n6[i].IP, n6[j].IP) < 0
})
return &geoipMatcher{
N4: n4,
N6: n6,
Inverse: list.InverseMatch,
}, nil
}
var _ hostMatcher = (*geositeMatcher)(nil)
type geositeDomainType int
const (
geositeDomainPlain geositeDomainType = iota
geositeDomainRegex
geositeDomainRoot
geositeDomainFull
)
type geositeDomain struct {
Type geositeDomainType
Value string
Regex *regexp.Regexp
Attrs map[string]bool
}
type geositeMatcher struct {
Domains []geositeDomain
// Attributes are matched using "and" logic - if you have multiple attributes here,
// a domain must have all of those attributes to be considered a match.
Attrs []string
}
func (m *geositeMatcher) matchDomain(domain geositeDomain, host HostInfo) bool {
// Match attributes first
if len(m.Attrs) > 0 {
if len(domain.Attrs) == 0 {
return false
}
for _, attr := range m.Attrs {
if !domain.Attrs[attr] {
return false
}
}
}
switch domain.Type {
case geositeDomainPlain:
return strings.Contains(host.Name, domain.Value)
case geositeDomainRegex:
if domain.Regex != nil {
return domain.Regex.MatchString(host.Name)
}
case geositeDomainFull:
return host.Name == domain.Value
case geositeDomainRoot:
if host.Name == domain.Value {
return true
}
return strings.HasSuffix(host.Name, "."+domain.Value)
default:
return false
}
return false
}
func (m *geositeMatcher) Match(host HostInfo) bool {
for _, domain := range m.Domains {
if m.matchDomain(domain, host) {
return true
}
}
return false
}
func newGeositeMatcher(list *v2geo.GeoSite, attrs []string) (*geositeMatcher, error) {
domains := make([]geositeDomain, len(list.Domain))
for i, domain := range list.Domain {
switch domain.Type {
case v2geo.Domain_Plain:
domains[i] = geositeDomain{
Type: geositeDomainPlain,
Value: domain.Value,
Attrs: domainAttributeToMap(domain.Attribute),
}
case v2geo.Domain_Regex:
regex, err := regexp.Compile(domain.Value)
if err != nil {
return nil, err
}
domains[i] = geositeDomain{
Type: geositeDomainRegex,
Regex: regex,
Attrs: domainAttributeToMap(domain.Attribute),
}
case v2geo.Domain_Full:
domains[i] = geositeDomain{
Type: geositeDomainFull,
Value: domain.Value,
Attrs: domainAttributeToMap(domain.Attribute),
}
case v2geo.Domain_RootDomain:
domains[i] = geositeDomain{
Type: geositeDomainRoot,
Value: domain.Value,
Attrs: domainAttributeToMap(domain.Attribute),
}
default:
return nil, errors.New("unsupported domain type")
}
}
return &geositeMatcher{
Domains: domains,
Attrs: attrs,
}, nil
}
func domainAttributeToMap(attrs []*v2geo.Domain_Attribute) map[string]bool {
m := make(map[string]bool)
for _, attr := range attrs {
// Supposedly there are also int attributes,
// but nobody seems to use them, so we treat everything as boolean for now.
m[attr.Key] = true
}
return m
}

View File

@@ -0,0 +1,44 @@
package v2geo
import (
"os"
"strings"
"google.golang.org/protobuf/proto"
)
// LoadGeoIP loads a GeoIP data file and converts it to a map.
// The keys of the map (country codes) are all normalized to lowercase.
func LoadGeoIP(filename string) (map[string]*GeoIP, error) {
bs, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
var list GeoIPList
if err := proto.Unmarshal(bs, &list); err != nil {
return nil, err
}
m := make(map[string]*GeoIP)
for _, entry := range list.Entry {
m[strings.ToLower(entry.CountryCode)] = entry
}
return m, nil
}
// LoadGeoSite loads a GeoSite data file and converts it to a map.
// The keys of the map (site keys) are all normalized to lowercase.
func LoadGeoSite(filename string) (map[string]*GeoSite, error) {
bs, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
var list GeoSiteList
if err := proto.Unmarshal(bs, &list); err != nil {
return nil, err
}
m := make(map[string]*GeoSite)
for _, entry := range list.Entry {
m[strings.ToLower(entry.CountryCode)] = entry
}
return m, nil
}

View File

@@ -0,0 +1,54 @@
package v2geo
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestLoadGeoIP(t *testing.T) {
m, err := LoadGeoIP("geoip.dat")
assert.NoError(t, err)
// Exact checks since we know the data.
assert.Len(t, m, 252)
assert.Equal(t, m["cn"].CountryCode, "CN")
assert.Len(t, m["cn"].Cidr, 10407)
assert.Equal(t, m["us"].CountryCode, "US")
assert.Len(t, m["us"].Cidr, 193171)
assert.Equal(t, m["private"].CountryCode, "PRIVATE")
assert.Len(t, m["private"].Cidr, 18)
assert.Contains(t, m["private"].Cidr, &CIDR{
Ip: []byte("\xc0\xa8\x00\x00"),
Prefix: 16,
})
}
func TestLoadGeoSite(t *testing.T) {
m, err := LoadGeoSite("geosite.dat")
assert.NoError(t, err)
// Exact checks since we know the data.
assert.Len(t, m, 1204)
assert.Equal(t, m["netflix"].CountryCode, "NETFLIX")
assert.Len(t, m["netflix"].Domain, 25)
assert.Contains(t, m["netflix"].Domain, &Domain{
Type: Domain_Full,
Value: "netflix.com.edgesuite.net",
})
assert.Contains(t, m["netflix"].Domain, &Domain{
Type: Domain_RootDomain,
Value: "fast.com",
})
assert.Len(t, m["google"].Domain, 1066)
assert.Contains(t, m["google"].Domain, &Domain{
Type: Domain_RootDomain,
Value: "ggpht.cn",
Attribute: []*Domain_Attribute{
{
Key: "cn",
TypedValue: &Domain_Attribute_BoolValue{BoolValue: true},
},
},
})
}

View File

@@ -0,0 +1,745 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.31.0
// protoc v4.24.4
// source: v2geo.proto
package v2geo
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// Type of domain value.
type Domain_Type int32
const (
// The value is used as is.
Domain_Plain Domain_Type = 0
// The value is used as a regular expression.
Domain_Regex Domain_Type = 1
// The value is a root domain.
Domain_RootDomain Domain_Type = 2
// The value is a domain.
Domain_Full Domain_Type = 3
)
// Enum value maps for Domain_Type.
var (
Domain_Type_name = map[int32]string{
0: "Plain",
1: "Regex",
2: "RootDomain",
3: "Full",
}
Domain_Type_value = map[string]int32{
"Plain": 0,
"Regex": 1,
"RootDomain": 2,
"Full": 3,
}
)
func (x Domain_Type) Enum() *Domain_Type {
p := new(Domain_Type)
*p = x
return p
}
func (x Domain_Type) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Domain_Type) Descriptor() protoreflect.EnumDescriptor {
return file_v2geo_proto_enumTypes[0].Descriptor()
}
func (Domain_Type) Type() protoreflect.EnumType {
return &file_v2geo_proto_enumTypes[0]
}
func (x Domain_Type) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Domain_Type.Descriptor instead.
func (Domain_Type) EnumDescriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{0, 0}
}
// Domain for routing decision.
type Domain struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Domain matching type.
Type Domain_Type `protobuf:"varint,1,opt,name=type,proto3,enum=Domain_Type" json:"type,omitempty"`
// Domain value.
Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
// Attributes of this domain. May be used for filtering.
Attribute []*Domain_Attribute `protobuf:"bytes,3,rep,name=attribute,proto3" json:"attribute,omitempty"`
}
func (x *Domain) Reset() {
*x = Domain{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Domain) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Domain) ProtoMessage() {}
func (x *Domain) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Domain.ProtoReflect.Descriptor instead.
func (*Domain) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{0}
}
func (x *Domain) GetType() Domain_Type {
if x != nil {
return x.Type
}
return Domain_Plain
}
func (x *Domain) GetValue() string {
if x != nil {
return x.Value
}
return ""
}
func (x *Domain) GetAttribute() []*Domain_Attribute {
if x != nil {
return x.Attribute
}
return nil
}
// IP for routing decision, in CIDR form.
type CIDR struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// IP address, should be either 4 or 16 bytes.
Ip []byte `protobuf:"bytes,1,opt,name=ip,proto3" json:"ip,omitempty"`
// Number of leading ones in the network mask.
Prefix uint32 `protobuf:"varint,2,opt,name=prefix,proto3" json:"prefix,omitempty"`
}
func (x *CIDR) Reset() {
*x = CIDR{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CIDR) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CIDR) ProtoMessage() {}
func (x *CIDR) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CIDR.ProtoReflect.Descriptor instead.
func (*CIDR) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{1}
}
func (x *CIDR) GetIp() []byte {
if x != nil {
return x.Ip
}
return nil
}
func (x *CIDR) GetPrefix() uint32 {
if x != nil {
return x.Prefix
}
return 0
}
type GeoIP struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
CountryCode string `protobuf:"bytes,1,opt,name=country_code,json=countryCode,proto3" json:"country_code,omitempty"`
Cidr []*CIDR `protobuf:"bytes,2,rep,name=cidr,proto3" json:"cidr,omitempty"`
InverseMatch bool `protobuf:"varint,3,opt,name=inverse_match,json=inverseMatch,proto3" json:"inverse_match,omitempty"`
// resource_hash instructs the simplified config converter to load domains from the geo file.
ResourceHash []byte `protobuf:"bytes,4,opt,name=resource_hash,json=resourceHash,proto3" json:"resource_hash,omitempty"`
Code string `protobuf:"bytes,5,opt,name=code,proto3" json:"code,omitempty"`
}
func (x *GeoIP) Reset() {
*x = GeoIP{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GeoIP) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GeoIP) ProtoMessage() {}
func (x *GeoIP) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GeoIP.ProtoReflect.Descriptor instead.
func (*GeoIP) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{2}
}
func (x *GeoIP) GetCountryCode() string {
if x != nil {
return x.CountryCode
}
return ""
}
func (x *GeoIP) GetCidr() []*CIDR {
if x != nil {
return x.Cidr
}
return nil
}
func (x *GeoIP) GetInverseMatch() bool {
if x != nil {
return x.InverseMatch
}
return false
}
func (x *GeoIP) GetResourceHash() []byte {
if x != nil {
return x.ResourceHash
}
return nil
}
func (x *GeoIP) GetCode() string {
if x != nil {
return x.Code
}
return ""
}
type GeoIPList struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Entry []*GeoIP `protobuf:"bytes,1,rep,name=entry,proto3" json:"entry,omitempty"`
}
func (x *GeoIPList) Reset() {
*x = GeoIPList{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GeoIPList) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GeoIPList) ProtoMessage() {}
func (x *GeoIPList) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GeoIPList.ProtoReflect.Descriptor instead.
func (*GeoIPList) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{3}
}
func (x *GeoIPList) GetEntry() []*GeoIP {
if x != nil {
return x.Entry
}
return nil
}
type GeoSite struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
CountryCode string `protobuf:"bytes,1,opt,name=country_code,json=countryCode,proto3" json:"country_code,omitempty"`
Domain []*Domain `protobuf:"bytes,2,rep,name=domain,proto3" json:"domain,omitempty"`
// resource_hash instructs the simplified config converter to load domains from the geo file.
ResourceHash []byte `protobuf:"bytes,3,opt,name=resource_hash,json=resourceHash,proto3" json:"resource_hash,omitempty"`
Code string `protobuf:"bytes,4,opt,name=code,proto3" json:"code,omitempty"`
}
func (x *GeoSite) Reset() {
*x = GeoSite{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GeoSite) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GeoSite) ProtoMessage() {}
func (x *GeoSite) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GeoSite.ProtoReflect.Descriptor instead.
func (*GeoSite) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{4}
}
func (x *GeoSite) GetCountryCode() string {
if x != nil {
return x.CountryCode
}
return ""
}
func (x *GeoSite) GetDomain() []*Domain {
if x != nil {
return x.Domain
}
return nil
}
func (x *GeoSite) GetResourceHash() []byte {
if x != nil {
return x.ResourceHash
}
return nil
}
func (x *GeoSite) GetCode() string {
if x != nil {
return x.Code
}
return ""
}
type GeoSiteList struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Entry []*GeoSite `protobuf:"bytes,1,rep,name=entry,proto3" json:"entry,omitempty"`
}
func (x *GeoSiteList) Reset() {
*x = GeoSiteList{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GeoSiteList) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GeoSiteList) ProtoMessage() {}
func (x *GeoSiteList) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GeoSiteList.ProtoReflect.Descriptor instead.
func (*GeoSiteList) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{5}
}
func (x *GeoSiteList) GetEntry() []*GeoSite {
if x != nil {
return x.Entry
}
return nil
}
type Domain_Attribute struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
// Types that are assignable to TypedValue:
//
// *Domain_Attribute_BoolValue
// *Domain_Attribute_IntValue
TypedValue isDomain_Attribute_TypedValue `protobuf_oneof:"typed_value"`
}
func (x *Domain_Attribute) Reset() {
*x = Domain_Attribute{}
if protoimpl.UnsafeEnabled {
mi := &file_v2geo_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Domain_Attribute) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Domain_Attribute) ProtoMessage() {}
func (x *Domain_Attribute) ProtoReflect() protoreflect.Message {
mi := &file_v2geo_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Domain_Attribute.ProtoReflect.Descriptor instead.
func (*Domain_Attribute) Descriptor() ([]byte, []int) {
return file_v2geo_proto_rawDescGZIP(), []int{0, 0}
}
func (x *Domain_Attribute) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (m *Domain_Attribute) GetTypedValue() isDomain_Attribute_TypedValue {
if m != nil {
return m.TypedValue
}
return nil
}
func (x *Domain_Attribute) GetBoolValue() bool {
if x, ok := x.GetTypedValue().(*Domain_Attribute_BoolValue); ok {
return x.BoolValue
}
return false
}
func (x *Domain_Attribute) GetIntValue() int64 {
if x, ok := x.GetTypedValue().(*Domain_Attribute_IntValue); ok {
return x.IntValue
}
return 0
}
type isDomain_Attribute_TypedValue interface {
isDomain_Attribute_TypedValue()
}
type Domain_Attribute_BoolValue struct {
BoolValue bool `protobuf:"varint,2,opt,name=bool_value,json=boolValue,proto3,oneof"`
}
type Domain_Attribute_IntValue struct {
IntValue int64 `protobuf:"varint,3,opt,name=int_value,json=intValue,proto3,oneof"`
}
func (*Domain_Attribute_BoolValue) isDomain_Attribute_TypedValue() {}
func (*Domain_Attribute_IntValue) isDomain_Attribute_TypedValue() {}
var File_v2geo_proto protoreflect.FileDescriptor
var file_v2geo_proto_rawDesc = []byte{
0x0a, 0x0b, 0x76, 0x32, 0x67, 0x65, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x97, 0x02,
0x0a, 0x06, 0x44, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x12, 0x20, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0c, 0x2e, 0x44, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x2e,
0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61,
0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
0x12, 0x2f, 0x0a, 0x09, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x18, 0x03, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x44, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x2e, 0x41, 0x74, 0x74,
0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x52, 0x09, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74,
0x65, 0x1a, 0x6c, 0x0a, 0x09, 0x41, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x12, 0x10,
0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79,
0x12, 0x1f, 0x0a, 0x0a, 0x62, 0x6f, 0x6f, 0x6c, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02,
0x20, 0x01, 0x28, 0x08, 0x48, 0x00, 0x52, 0x09, 0x62, 0x6f, 0x6f, 0x6c, 0x56, 0x61, 0x6c, 0x75,
0x65, 0x12, 0x1d, 0x0a, 0x09, 0x69, 0x6e, 0x74, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x03,
0x20, 0x01, 0x28, 0x03, 0x48, 0x00, 0x52, 0x08, 0x69, 0x6e, 0x74, 0x56, 0x61, 0x6c, 0x75, 0x65,
0x42, 0x0d, 0x0a, 0x0b, 0x74, 0x79, 0x70, 0x65, 0x64, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x22,
0x36, 0x0a, 0x04, 0x54, 0x79, 0x70, 0x65, 0x12, 0x09, 0x0a, 0x05, 0x50, 0x6c, 0x61, 0x69, 0x6e,
0x10, 0x00, 0x12, 0x09, 0x0a, 0x05, 0x52, 0x65, 0x67, 0x65, 0x78, 0x10, 0x01, 0x12, 0x0e, 0x0a,
0x0a, 0x52, 0x6f, 0x6f, 0x74, 0x44, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x10, 0x02, 0x12, 0x08, 0x0a,
0x04, 0x46, 0x75, 0x6c, 0x6c, 0x10, 0x03, 0x22, 0x2e, 0x0a, 0x04, 0x43, 0x49, 0x44, 0x52, 0x12,
0x0e, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x02, 0x69, 0x70, 0x12,
0x16, 0x0a, 0x06, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52,
0x06, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x22, 0xa3, 0x01, 0x0a, 0x05, 0x47, 0x65, 0x6f, 0x49,
0x50, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x72, 0x79, 0x5f, 0x63, 0x6f, 0x64,
0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x72, 0x79,
0x43, 0x6f, 0x64, 0x65, 0x12, 0x19, 0x0a, 0x04, 0x63, 0x69, 0x64, 0x72, 0x18, 0x02, 0x20, 0x03,
0x28, 0x0b, 0x32, 0x05, 0x2e, 0x43, 0x49, 0x44, 0x52, 0x52, 0x04, 0x63, 0x69, 0x64, 0x72, 0x12,
0x23, 0x0a, 0x0d, 0x69, 0x6e, 0x76, 0x65, 0x72, 0x73, 0x65, 0x5f, 0x6d, 0x61, 0x74, 0x63, 0x68,
0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0c, 0x69, 0x6e, 0x76, 0x65, 0x72, 0x73, 0x65, 0x4d,
0x61, 0x74, 0x63, 0x68, 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0c, 0x72, 0x65, 0x73,
0x6f, 0x75, 0x72, 0x63, 0x65, 0x48, 0x61, 0x73, 0x68, 0x12, 0x12, 0x0a, 0x04, 0x63, 0x6f, 0x64,
0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x22, 0x29, 0x0a,
0x09, 0x47, 0x65, 0x6f, 0x49, 0x50, 0x4c, 0x69, 0x73, 0x74, 0x12, 0x1c, 0x0a, 0x05, 0x65, 0x6e,
0x74, 0x72, 0x79, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x06, 0x2e, 0x47, 0x65, 0x6f, 0x49,
0x50, 0x52, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x22, 0x86, 0x01, 0x0a, 0x07, 0x47, 0x65, 0x6f,
0x53, 0x69, 0x74, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x72, 0x79, 0x5f,
0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x75, 0x6e,
0x74, 0x72, 0x79, 0x43, 0x6f, 0x64, 0x65, 0x12, 0x1f, 0x0a, 0x06, 0x64, 0x6f, 0x6d, 0x61, 0x69,
0x6e, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x07, 0x2e, 0x44, 0x6f, 0x6d, 0x61, 0x69, 0x6e,
0x52, 0x06, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65, 0x73, 0x6f,
0x75, 0x72, 0x63, 0x65, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x0c, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x48, 0x61, 0x73, 0x68, 0x12, 0x12, 0x0a,
0x04, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x63, 0x6f, 0x64,
0x65, 0x22, 0x2d, 0x0a, 0x0b, 0x47, 0x65, 0x6f, 0x53, 0x69, 0x74, 0x65, 0x4c, 0x69, 0x73, 0x74,
0x12, 0x1e, 0x0a, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x08, 0x2e, 0x47, 0x65, 0x6f, 0x53, 0x69, 0x74, 0x65, 0x52, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79,
0x42, 0x09, 0x5a, 0x07, 0x2e, 0x2f, 0x76, 0x32, 0x67, 0x65, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x33,
}
var (
file_v2geo_proto_rawDescOnce sync.Once
file_v2geo_proto_rawDescData = file_v2geo_proto_rawDesc
)
func file_v2geo_proto_rawDescGZIP() []byte {
file_v2geo_proto_rawDescOnce.Do(func() {
file_v2geo_proto_rawDescData = protoimpl.X.CompressGZIP(file_v2geo_proto_rawDescData)
})
return file_v2geo_proto_rawDescData
}
var file_v2geo_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_v2geo_proto_msgTypes = make([]protoimpl.MessageInfo, 7)
var file_v2geo_proto_goTypes = []interface{}{
(Domain_Type)(0), // 0: Domain.Type
(*Domain)(nil), // 1: Domain
(*CIDR)(nil), // 2: CIDR
(*GeoIP)(nil), // 3: GeoIP
(*GeoIPList)(nil), // 4: GeoIPList
(*GeoSite)(nil), // 5: GeoSite
(*GeoSiteList)(nil), // 6: GeoSiteList
(*Domain_Attribute)(nil), // 7: Domain.Attribute
}
var file_v2geo_proto_depIdxs = []int32{
0, // 0: Domain.type:type_name -> Domain.Type
7, // 1: Domain.attribute:type_name -> Domain.Attribute
2, // 2: GeoIP.cidr:type_name -> CIDR
3, // 3: GeoIPList.entry:type_name -> GeoIP
1, // 4: GeoSite.domain:type_name -> Domain
5, // 5: GeoSiteList.entry:type_name -> GeoSite
6, // [6:6] is the sub-list for method output_type
6, // [6:6] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
}
func init() { file_v2geo_proto_init() }
func file_v2geo_proto_init() {
if File_v2geo_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_v2geo_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Domain); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_v2geo_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CIDR); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_v2geo_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GeoIP); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_v2geo_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GeoIPList); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_v2geo_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GeoSite); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_v2geo_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GeoSiteList); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_v2geo_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Domain_Attribute); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
file_v2geo_proto_msgTypes[6].OneofWrappers = []interface{}{
(*Domain_Attribute_BoolValue)(nil),
(*Domain_Attribute_IntValue)(nil),
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_v2geo_proto_rawDesc,
NumEnums: 1,
NumMessages: 7,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_v2geo_proto_goTypes,
DependencyIndexes: file_v2geo_proto_depIdxs,
EnumInfos: file_v2geo_proto_enumTypes,
MessageInfos: file_v2geo_proto_msgTypes,
}.Build()
File_v2geo_proto = out.File
file_v2geo_proto_rawDesc = nil
file_v2geo_proto_goTypes = nil
file_v2geo_proto_depIdxs = nil
}

View File

@@ -0,0 +1,76 @@
syntax = "proto3";
option go_package = "./v2geo";
// This file is copied from
// https://github.com/v2fly/v2ray-core/blob/master/app/router/routercommon/common.proto
// with some modifications.
// Domain for routing decision.
message Domain {
// Type of domain value.
enum Type {
// The value is used as is.
Plain = 0;
// The value is used as a regular expression.
Regex = 1;
// The value is a root domain.
RootDomain = 2;
// The value is a domain.
Full = 3;
}
// Domain matching type.
Type type = 1;
// Domain value.
string value = 2;
message Attribute {
string key = 1;
oneof typed_value {
bool bool_value = 2;
int64 int_value = 3;
}
}
// Attributes of this domain. May be used for filtering.
repeated Attribute attribute = 3;
}
// IP for routing decision, in CIDR form.
message CIDR {
// IP address, should be either 4 or 16 bytes.
bytes ip = 1;
// Number of leading ones in the network mask.
uint32 prefix = 2;
}
message GeoIP {
string country_code = 1;
repeated CIDR cidr = 2;
bool inverse_match = 3;
// resource_hash instructs the simplified config converter to load domains from the geo file.
bytes resource_hash = 4;
string code = 5;
}
message GeoIPList {
repeated GeoIP entry = 1;
}
message GeoSite {
string country_code = 1;
repeated Domain domain = 2;
// resource_hash instructs the simplified config converter to load domains from the geo file.
bytes resource_hash = 3;
string code = 4;
}
message GeoSiteList {
repeated GeoSite entry = 1;
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/apernet/OpenGFW/analyzer"
"github.com/apernet/OpenGFW/modifier"
"github.com/apernet/OpenGFW/ruleset/builtins/geo"
)
// ExprRule is the external representation of an expression rule.
@@ -45,14 +46,14 @@ type compiledExprRule struct {
Action Action
ModInstance modifier.Instance
Program *vm.Program
Analyzers map[string]struct{}
}
var _ Ruleset = (*exprRuleset)(nil)
type exprRuleset struct {
Rules []compiledExprRule
Ans []analyzer.Analyzer
Rules []compiledExprRule
Ans []analyzer.Analyzer
GeoMatcher *geo.GeoMatcher
}
func (r *exprRuleset) Analyzers(info StreamInfo) []analyzer.Analyzer {
@@ -83,40 +84,60 @@ func (r *exprRuleset) Match(info StreamInfo) (MatchResult, error) {
// CompileExprRules compiles a list of expression rules into a ruleset.
// It returns an error if any of the rules are invalid, or if any of the analyzers
// used by the rules are unknown (not provided in the analyzer list).
func CompileExprRules(rules []ExprRule, ans []analyzer.Analyzer, mods []modifier.Modifier) (Ruleset, error) {
func CompileExprRules(rules []ExprRule, ans []analyzer.Analyzer, mods []modifier.Modifier, config *BuiltinConfig) (Ruleset, error) {
var compiledRules []compiledExprRule
fullAnMap := analyzersToMap(ans)
fullModMap := modifiersToMap(mods)
depAnMap := make(map[string]analyzer.Analyzer)
geoMatcher, err := geo.NewGeoMatcher(config.GeoSiteFilename, config.GeoIpFilename)
if err != nil {
return nil, err
}
// Compile all rules and build a map of analyzers that are used by the rules.
for _, rule := range rules {
action, ok := actionStringToAction(rule.Action)
if !ok {
return nil, fmt.Errorf("rule %q has invalid action %q", rule.Name, rule.Action)
}
visitor := &depVisitor{Analyzers: make(map[string]struct{})}
visitor := &idVisitor{Identifiers: make(map[string]bool)}
program, err := expr.Compile(rule.Expr,
func(c *conf.Config) {
c.Strict = false
c.Expect = reflect.Bool
c.Visitors = append(c.Visitors, visitor)
registerBuiltinFunctions(c.Functions, geoMatcher)
},
)
if err != nil {
return nil, fmt.Errorf("rule %q has invalid expression: %w", rule.Name, err)
}
for name := range visitor.Analyzers {
a, ok := fullAnMap[name]
if !ok && !isBuiltInAnalyzer(name) {
return nil, fmt.Errorf("rule %q uses unknown analyzer %q", rule.Name, name)
for name := range visitor.Identifiers {
if isBuiltInAnalyzer(name) {
continue
}
// Check if it's one of the built-in functions, and if so,
// skip it as an analyzer & do initialization if necessary.
switch name {
case "geoip":
if err := geoMatcher.LoadGeoIP(); err != nil {
return nil, fmt.Errorf("rule %q failed to load geoip: %w", rule.Name, err)
}
case "geosite":
if err := geoMatcher.LoadGeoSite(); err != nil {
return nil, fmt.Errorf("rule %q failed to load geosite: %w", rule.Name, err)
}
default:
a, ok := fullAnMap[name]
if !ok {
return nil, fmt.Errorf("rule %q uses unknown analyzer %q", rule.Name, name)
}
depAnMap[name] = a
}
depAnMap[name] = a
}
cr := compiledExprRule{
Name: rule.Name,
Action: action,
Program: program,
Analyzers: visitor.Analyzers,
Name: rule.Name,
Action: action,
Program: program,
}
if action == ActionModify {
mod, ok := fullModMap[rule.Modifier.Name]
@@ -137,11 +158,29 @@ func CompileExprRules(rules []ExprRule, ans []analyzer.Analyzer, mods []modifier
depAns = append(depAns, a)
}
return &exprRuleset{
Rules: compiledRules,
Ans: depAns,
Rules: compiledRules,
Ans: depAns,
GeoMatcher: geoMatcher,
}, nil
}
func registerBuiltinFunctions(funcMap map[string]*ast.Function, geoMatcher *geo.GeoMatcher) {
funcMap["geoip"] = &ast.Function{
Name: "geoip",
Func: func(params ...any) (any, error) {
return geoMatcher.MatchGeoIp(params[0].(string), params[1].(string)), nil
},
Types: []reflect.Type{reflect.TypeOf(geoMatcher.MatchGeoIp)},
}
funcMap["geosite"] = &ast.Function{
Name: "geosite",
Func: func(params ...any) (any, error) {
return geoMatcher.MatchGeoSite(params[0].(string), params[1].(string)), nil
},
Types: []reflect.Type{reflect.TypeOf(geoMatcher.MatchGeoSite)},
}
}
func streamInfoToExprEnv(info StreamInfo) map[string]interface{} {
m := map[string]interface{}{
"id": info.ID,
@@ -208,12 +247,12 @@ func modifiersToMap(mods []modifier.Modifier) map[string]modifier.Modifier {
return modMap
}
type depVisitor struct {
Analyzers map[string]struct{}
type idVisitor struct {
Identifiers map[string]bool
}
func (v *depVisitor) Visit(node *ast.Node) {
func (v *idVisitor) Visit(node *ast.Node) {
if idNode, ok := (*node).(*ast.IdentifierNode); ok {
v.Analyzers[idNode.Value] = struct{}{}
v.Identifiers[idNode.Value] = true
}
}

View File

@@ -92,3 +92,8 @@ type Ruleset interface {
// It must be safe for concurrent use by multiple workers.
Match(StreamInfo) (MatchResult, error)
}
type BuiltinConfig struct {
GeoSiteFilename string
GeoIpFilename string
}