Table of Contents
An Example
Example Code Template
Screenshot
script
script body
Result
Introduction
Processor Basics
Basic Techniques
Set a New Attribute
Remove an Attribute
Write Content
Get Content
Transfer a Flow File to the Success Relationship
Using DBCP
Handling Processor Start and Stop
Query the Child Table from the Master Table Data and Assemble the Results
Script SQL: 1. Delete, 2. Query, 3. Insert
1. UI
2. Configure Two Databases
3. Code Body
An Example
A side note: when configuring a database connection in NiFi, write file paths with forward slashes, e.g. (f:/jdbc-xx.jar).
Example Code Template
Screenshot

- PutFile: just specify a target directory
script
script body
//create a new flow file, write content into it, and route it to success
def flowFile = session.create()
flowFile.write("UTF-8", "THE CharSequence to write into flow file replacing current content")
session.transfer(flowFile, REL_SUCCESS)
Result
Introduction
Processor Basics
| Variable | Type | Description |
| --- | --- | --- |
| session | org.apache.nifi.processor.ProcessSession | the session used to get, modify, and transfer input flow files |
| context | org.apache.nifi.processor.ProcessContext | the processor context (rarely needed) |
| log | org.apache.nifi.logging.ComponentLog | the logger for this processor instance |
| REL_SUCCESS | org.apache.nifi.processor.Relationship | the success relationship |
| REL_FAILURE | org.apache.nifi.processor.Relationship | the failure relationship |
| CTL | java.util.HashMap<String, ControllerService> | a map populated with the controller services defined by `CTL.*` processor properties. A `CTL.`-prefixed property can be linked to a controller service, giving the script access to that service without extra code. |
| SQL | java.util.HashMap<String, groovy.sql.Sql> | a map populated with `groovy.sql.Sql` objects connected to the databases defined by `SQL.*` processor properties. A `SQL.`-prefixed property can only be linked to a DBCPService. |
| Dynamic processor properties | org.apache.nifi.components.PropertyDescriptor | all processor properties whose names do not start with `CTL.` or `SQL.` |
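To see how these bindings fit together, here is a minimal sketch (the try/catch routing pattern and the `filename` attribute are illustrative assumptions, not part of the original):

//get an incoming flow file; if there is none, do nothing
def ff = session.get()
if (!ff) return
try {
    //log is the ComponentLog binding from the table above
    log.info("processing ${ff.getAttribute('filename')}")
    REL_SUCCESS << ff
} catch (Exception e) {
    log.error('script failed', e)
    REL_FAILURE << ff
}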
Basic Techniques
Set a New Attribute
flowFile.ATTRIBUTE_NAME = ATTRIBUTE_VALUE
flowFile.'mime.type' = 'text/xml'
flowFile.putAttribute("ATTRIBUTE_NAME", ATTRIBUTE_VALUE)
//the same as
flowFile = session.putAttribute(flowFile, "ATTRIBUTE_NAME", ATTRIBUTE_VALUE)
Remove an Attribute
flowFile.ATTRIBUTE_NAME = null
//equals to
flowFile = session.removeAttribute(flowFile, "ATTRIBUTE_NAME")
Write Content
flowFile.write("UTF-8", "THE CharSequence to write into flow file replacing current content")
flowFile.write("UTF-8"){writer->
do something with java.io.Writer...
}
flowFile.write{outStream->
do something with output stream...
}
flowFile.write{inStream, outStream->
do something with input and output streams...
}
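For a concrete instance of the writer closure, here is a short sketch (the JSON payload is a made-up example):

import groovy.json.JsonOutput
//replace the flow file content with a small JSON document
flowFile.write("UTF-8"){ writer ->
    writer << JsonOutput.toJson([status: 'ok'])
}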
Get Content
//get the content as an InputStream
InputStream i = flowFile.read()
//parse the content as JSON
def json = new groovy.json.JsonSlurper().parse( flowFile.read() )
//read the full content as a string
String text = flowFile.read().getText("UTF-8")
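These read forms pair naturally with write; a common round trip (a sketch, where the uppercase transform is only an illustration) reads the content as text and writes a transformed version back:

String body = flowFile.read().getText("UTF-8")
flowFile.write("UTF-8", body.toUpperCase())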
Transfer a Flow File to the Success Relationship
REL_SUCCESS << flowFile
flowFile.transfer(REL_SUCCESS)
//the same as:
session.transfer(flowFile, REL_SUCCESS)
Using DBCP
- Add a property
- Add the script body
- The DBCP script:
import groovy.sql.Sql
//define property named `SQL.db` connected to a DBCPConnectionPool controller service
//for this case it's an H2 database example
//read value from the database with prepared statement
//and assign into flowfile attribute `db.yesterday`
def daysAdd = -1
def row = SQL.db.firstRow("select dateadd('DAY', ${daysAdd}, sysdate) as DB_DATE from dual")
flowFile.'db.yesterday' = row.DB_DATE
//to work with BLOBs and CLOBs in the database
//use parameter casting using groovy.sql.Sql.BLOB(Stream) and groovy.sql.Sql.CLOB(Reader)
//write content of the flow file into database blob
flowFile.read{ rawIn ->
    def parms = [
        p_id   : flowFile.ID as Long, //get flow file attribute named `ID`
        p_data : Sql.BLOB( rawIn ),   //use input stream as BLOB sql parameter
    ]
    SQL.db.executeUpdate(parms, "update mytable set data = :p_data where id = :p_id")
}
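Following the same pattern, a CLOB can be written from the flow file content (a sketch, assuming a hypothetical CLOB column `txt` in the same `mytable`):

flowFile.read{ rawIn ->
    def parms = [
        p_id  : flowFile.ID as Long,
        p_txt : Sql.CLOB( new InputStreamReader(rawIn, "UTF-8") ), //reader as CLOB sql parameter
    ]
    SQL.db.executeUpdate(parms, "update mytable set txt = :p_txt where id = :p_id")
}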
Handling Processor Start and Stop
import org.apache.nifi.processor.ProcessContext
import java.util.concurrent.atomic.AtomicLong
class Const{
    static Date startTime = null;
    static AtomicLong triggerCount = null;
}
static onStart(ProcessContext context){
    Const.startTime = new Date()
    Const.triggerCount = new AtomicLong(0)
    println "onStart $context ${Const.startTime}"
}
static onStop(ProcessContext context){
    def alive = (System.currentTimeMillis() - Const.startTime.getTime()) / 1000
    println "onStop $context executed ${ Const.triggerCount } times during ${ alive } seconds"
}
flowFile.'trigger.count' = Const.triggerCount.incrementAndGet()
REL_SUCCESS << flowFile
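ExecuteGroovyScript invokes a script's static onStart(ProcessContext) method when the processor is started and static onStop(ProcessContext) when it is stopped, so they are the place to set up and tear down state shared across triggers, which is what the Const class holds here.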
Query the Child Table from the Master Table Data and Assemble the Results
//By [email protected]
import groovy.json.JsonSlurper
import groovy.sql.Sql
import groovy.json.JsonOutput
//get the flow file
def ff = session.get()
if(!ff) return
String text = ff.read().getText("UTF-8")
def jsonSlurper = new JsonSlurper()
//parse the content into a Map
def map = jsonSlurper.parseText(text)
//get the attributes of this flow file
def join_value = ff.getAttribute('join_value')
def tablename = ff.getAttribute('slaveTableName')
def tsLt = ff.getAttribute('tsLt')
def slaveJoinColumn = ff.getAttribute('slaveJoinColumn')
def slaveJsonField = ff.getAttribute('slaveJsonField')
//build the sql that selects ids from the slave table
def sql = "select distinct id from ${tablename} where ${slaveJoinColumn} = '${join_value}' and dr != '2' and ts < '${tsLt}'"
//SQL.mydb references a http://docs.groovy-lang.org/2.4.10/html/api/groovy/sql/Sql.html object
List list = SQL.mydb.rows(sql.toString()) //important to cast to String
List result = []
//query the latest state of each child row within the time range (< tsLt)
for(int i = 0; i < list.size(); i++){
    sql = "select * from test.fi_note_cusl_ysbxd_b where id = '${list[i].id}' and dr != '2' order by ts DESC limit 1"
    result.add( SQL.mydb.firstRow(sql.toString()) )
}
map.put(slaveJsonField.toString(), result) //combine master and child data into one JSON map
def output = JsonOutput.toJson(map)
session.remove(ff)
def newff = session.create()
newff.putAttribute("data", output)
//transfer flow file to success
REL_SUCCESS << newff
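One caveat: this script interpolates attribute values directly into the SQL text. A safer variant (a sketch with the same bindings; only the parameter style changes) lets groovy.sql.Sql bind the values through a prepared statement:

//table and column names cannot be bound as parameters, but the values can
def safeSql = "select distinct id from ${tablename} where ${slaveJoinColumn} = ? and dr != '2' and ts < ?"
List ids = SQL.mydb.rows(safeSql.toString(), [join_value, tsLt])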
Script SQL: 1. Delete, 2. Query, 3. Insert
1. UI
2. Configure Two Databases
3. Code Body
//sql doc: http://docs.groovy-lang.org/2.4.10/html/api/groovy/sql/Sql.html
//clear the target (update) database
SQL.mydb2.execute("delete from cd")
//query the source data
def sql = "select a,b from ab"
List list = SQL.mydb.rows(sql.toString())
//copy every source row into the target table
for(int i = 0; i < list.size(); i++){
    //insert data
    SQL.mydb2.executeInsert("insert into cd (c,d) values('${list[i].a}','${list[i].b}')".toString()) //cast to String, as above
}
//report success to the downstream services
def output = "ok"
def newff = session.create()
newff.putAttribute("data", output)
REL_SUCCESS << newff
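For larger tables, row-by-row inserts get slow; a batched variant (a sketch, assuming the same SQL.mydb2 binding) uses groovy.sql.Sql's withBatch:

//execute the inserts as prepared-statement batches of 100
SQL.mydb2.withBatch(100, "insert into cd (c,d) values (?,?)"){ ps ->
    list.each{ row -> ps.addBatch([row.a, row.b]) }
}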
ok
Running SQL directly like this is really convenient. Happy hacking, go go go! ^_^ ^_^