# oafp url="https://gist.githubusercontent.com/nmaguiar/557e12e4a840d513635b3a57cb57b722/raw/oafp-examples.yaml" in=yaml out=template path=data templatepath=tmpl sql="select * where d like '%something%'"
# oafp url="https://gist.githubusercontent.com/nmaguiar/557e12e4a840d513635b3a57cb57b722/raw/oafp-examples.yaml" in=yaml path="data[].{category:c,subCategory:s,description:d}" from="sort(category,subCategory,description)" out=ctable
# verify: oafp oafp-examples.yaml path="data[?length(nvl(c, \'\')) == \`0\` || length(nvl(s, \'\')) == \`0\` || length(nvl(d, \'\')) == \`0\` || length(nvl(e, \'\')) == \`0\`]"
#
tmpl: |-
{{#each this}}
{{{$sline ($acolor 'BOLD' ($concat '📖 ' c ' | ' s)) __ 'FAINT' __ 'openBottomRect'}}}
{{{$sline ($acolor 'ITALIC' d) __ 'FAINT' __ 'doubleLine'}}}
{{{$acolor 'FAINT' e}}}
{{/each}}
data:
- c: AI
s: Evaluate
d: Given a strace summary of the execution of a command (CMD) use a LLM to describe a reason on the results obtained.
e: |-
export OAFP_MODEL="(type: openai, model: 'gpt-4o-mini', timeout: 900000, temperature: 0)"
CMD="strace --tips" && strace -c -o /tmp/result.txt $CMD ; cat /tmp/result.txt ; oafp /tmp/result.txt in=raw llmcontext="strace execution json summary" llmprompt="given the provided strace execution summary produce a table-based analysis with a small description of each system call and a bullet point conclusion summary" out=md ; rm /tmp/result.txt
# Grid / Java
- c: Grid
s: Java
d: Parses Java hsperf data into a looping grid.
e: |-
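# Reads the JVM hsperf data and renders a 2x2 grid of charts (threads, class loaders, heap and metaspace) in a continuous loop (loop=1)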
HSPERF=/tmp/hsperfdata_user/12345 && oafp $HSPERF in=hsperf path=java out=grid grid="[[(title:Threads,type:chart,obj:'int threads.live:green:live threads.livePeak:red:peak threads.daemon:blue:daemon -min:0')|(title:Class Loaders,type:chart,obj:'int cls.loadedClasses:blue:loaded cls.unloadedClasses:red:unloaded')]|[(title:Heap,type:chart,obj:'bytes __mem.total:red:total __mem.used:blue:used -min:0')|(title:Metaspace,type:chart,obj:'bytes __mem.metaTotal:blue:total __mem.metaUsed:green:used -min:0')]]" loop=1
- c: Grid
s: Java
d: Parses Java hsperf data plus the current RSS memory of the Java process into a looping grid.
e: |-
JPID=12345 && HSPERF=/tmp/hsperfdata_openvscode-server/$JPID && oafp in=oafp data="[(file: $HSPERF, in: hsperf, path: java)|(cmd: 'ps -p $JPID -o rss=', path: '{ rss: mul(@,\`1024\`) }')]" merge=true out=grid grid="[[(title:Threads,type:chart,obj:'int threads.live:green:live threads.livePeak:red:peak threads.daemon:blue:daemon -min:0')|(title:RSS,type:chart,obj:'bytes rss:blue:rss')]|[(title:Heap,type:chart,obj:'bytes __mem.total:red:total __mem.used:blue:used -min:0')|(title:Metaspace,type:chart,obj:'bytes __mem.metaTotal:blue:total __mem.metaUsed:green:used -min:0')]]" loop=1
# Generic / Text
- c: Generic
s: Text
d: Get a json with lyrics of a song.
e: |-
curl -s https://api.lyrics.ovh/v1/Coldplay/Viva%20La%20Vida | oafp path="substring(lyrics,index_of(lyrics, '\n'),length(lyrics))"
# Generic / Excel
- c: Generic
s: Excel
d: Processes each json file in /some/data creating and updating the data.xlsx file with a sheet for each file.
e: |-
find /some/data -name "*.json" | xargs -I '{}' /bin/sh -c 'oafp file={} output=xls xlsfile=data.xlsx xlsopen=false xlssheet=$(echo {} | sed "s/.*\/\(.*\)\.json/\1/g" )'
- c: Generic
s: Excel
d: Building an Excel file with the AWS IPv4 and IPv6 ranges (1).
e: |-
curl https://ip-ranges.amazonaws.com/ip-ranges.json > ip-ranges.json
- c: Generic
s: Excel
d: Building an Excel file with the AWS IPv4 and IPv6 ranges (2).
e: |-
oafp ip-ranges.json path=prefixes out=xls xlsfile=aws-ip-ranges.xlsx xlssheet=ipv4
- c: Generic
s: Excel
d: Building an Excel file with the AWS IPv4 and IPv6 ranges (3).
e: |-
oafp ip-ranges.json path=ipv6_prefixes out=xls xlsfile=aws-ip-ranges.xlsx xlssheet=ipv6
- c: Generic
s: YAML
d: Given a YAML file with a data array composed of maps with fields 'c', 's', 'd' and 'e', filters for any record where any field has no contents.
e: |-
oafp oafp-examples.yaml path="data[?length(nvl(c, \'\')) == \`0\` || length(nvl(s, \'\')) == \`0\` || length(nvl(d, \'\')) == \`0\` || length(nvl(e, \'\')) == \`0\`]"
# DB / H2
- c: DB
s: H2
d: Store the json result of a command into an H2 database table.
e: |-
oaf -c "\$o(listFilesRecursive('.'),{__format:'json'})" | oafp out=db dbjdbc="jdbc:h2:./data" dbuser=sa dbpass=sa dbtable=data
- c: DB
s: H2
d: Perform a SQL query over an H2 database.
e: |-
echo "select * from \"data\"" | oafp in=db indbjdbc="jdbc:h2:./data" indbuser=sa indbpass=sa out=ctable
- c: DB
s: H2
d: Perform queries and DML SQL statements over an H2 database
e: |-
# Dump data into a table
oafp data="[(id: 1, val: x)|(id: 2, val: y)|(id: 3, val: z)]" in=slon out=db dbjdbc="jdbc:h2:./data" dbuser=sa dbpass=sa dbtable=data
# Copy data to another new table
ESQL='create table newdata as select * from "data"' && oafp in=db indbjdbc="jdbc:h2:./data" indbuser=sa indbpass=sa indbexec=true data="$ESQL"
# Drop the original table
ESQL='drop table "data"' && oafp in=db indbjdbc="jdbc:h2:./data" indbuser=sa indbpass=sa indbexec=true data="$ESQL"
# Output data from the new table
SQL='select * from newdata' && oafp in=db indbjdbc="jdbc:h2:./data" indbuser=sa indbpass=sa data="$SQL" out=ctable
# DB / SQLite
- c: DB
s: SQLite
d: Lists all files in openaf.jar, stores the result in a 'data' table on a SQLite data.db file and performs a query over the stored data.
e: |-
# ojob ojob.io/db/getDriver op=install db=sqlite
opack install jdbc-sqlite
# Dump data into a 'data' table
oafp in=ls data=openaf.jar out=db dbjdbc="jdbc:sqlite:data.db" dblib=sqlite dbtable="data"
# Get stats over the dumped data from the 'data' table
SQL='select count(1) numberOfFiles, round(avg("size")) avgSize, sum("size") totalSize from "data"' && oafp in=db indbjdbc="jdbc:sqlite:data.db" indblib=sqlite data="$SQL" out=ctable
- c: DB
s: SQLite
d: Store the json result in a SQLite database table.
e: |-
# ojob ojob.io/db/getDriver op=install db=sqlite
opack install jdbc-sqlite
oaf -c "\$o(listFilesRecursive('.'),{__format:'json'})" | oafp out=db dbjdbc="jdbc:sqlite:data.db" dbtable=data dblib=sqlite
- c: DB
s: SQLite
d: Perform a query over a database using JDBC.
e: |-
# ojob ojob.io/db/getDriver op=install db=sqlite
opack install jdbc-sqlite
echo "select * from data" | oafp in=db indbjdbc="jdbc:sqlite:data.db" indbtable=data indblib=sqlite out=ctable
- c: DB
s: List
d: List all of OpenAF's pre-prepared oPack JDBC drivers
e: |-
oaf -c "sprint(getOPackRemoteDB())" | oafp maptoarray=true opath="[].{name:name,description:description,version:version}" sql="select * where name like 'jdbc-%' order by name" out=ctable
# Generic / Docker
- c: Docker
s: Containers
d: Output a table with the list of running containers.
e: |-
oafp cmd="docker ps --format json" input=ndjson ndjsonjoin=true path="[].{id:ID,name:Names,state:State,image:Image,networks:Networks,ports:Ports,Status:Status}" sql="select * order by networks,state,name" output=ctable
- c: Docker
s: Stats
d: Output a table with the docker stats with each combined value broken down into separate columns.
e: |-
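# Parses the fixed-width 'docker stats' output (linesvisual) and splits the combined 'X / Y' memory, network and block I/O columns into separate numeric byte values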
oafp cmd="docker stats --no-stream" in=lines linesvisual=true linesjoin=true out=ctree path="[].{containerId:\"CONTAINER ID\",pids:PIDS,name:\"NAME\",cpuPerc:\"CPU %\",memory:\"MEM USAGE / LIMIT\",memPerc:\"MEM %\",netIO:\"NET I/O\",blockIO:\"BLOCK I/O\"}|[].{containerId:containerId,pids:pids,name:name,cpuPerc:replace(cpuPerc,'%','',''),memUsage:from_bytesAbbr(split(memory,' / ')[0]),memLimit:from_bytesAbbr(split(memory,' / ')[1]),memPerc:replace(memPerc,'%','',''),netIn:from_bytesAbbr(split(netIO,' / ')[0]),netOut:from_bytesAbbr(split(netIO,' / ')[1]),blockIn:from_bytesAbbr(split(blockIO,' / ')[0]),blockOut:from_bytesAbbr(split(blockIO,' / ')[1])}" out=ctable
- c: Docker
s: Storage
d: Output a table with the docker volumes info.
e: |-
docker volume ls --format json | oafp in=ndjson ndjsonjoin=true out=ctable
- c: Docker
s: Network
d: Output a table with the docker networks info.
e: |-
docker network ls --format json | oafp in=ndjson ndjsonjoin=true out=ctable
# Mac / Brew
- c: Mac
s: Brew
d: List all the packages and corresponding versions installed on a Mac by brew.
e: |-
brew list --versions | oafp in=lines linesjoin=true path="[].split(@,' ').{package:[0],version:[1]}|sort_by(@,&package)" out=ctable
# Unix / Files
- c: Unix
s: Files
d: Converting a Unix syslog into a json output.
e: |-
cat syslog | oafp in=raw path="split(trim(@),'\n').map(&split(@, ' ').{ date: concat([0],concat(' ',[1])), time: [2], host: [3], process: [4], message: join(' ',[5:]) }, [])"
- c: Unix
s: Files
d: Converting Linux's /etc/os-release to SQL insert statements.
e: |-
oafp cmd="cat /etc/os-release" in=ini outkey=release path="[@]" sql="select '$HOSTNAME' \"HOST\", *" out=sql sqlnocreate=true
- c: Unix
s: Files
d: Parses the Linux /etc/passwd into a table ordered by uid and gid.
e: |-
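# Reads /etc/passwd as a ':'-separated CSV without header (fields f0..f6), then filters out commented entries and sorts by uid and gid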
oafp cmd="cat /etc/passwd" in=csv inputcsv="(withHeader: false, withDelimiter: ':')" path="[].{user:f0,pass:f1,uid:to_number(f2),gid:to_number(f3),description:f4,home:f5,shell:f6}" out=json | oafp from="notStarts(user, '#').sort(uid, gid)" out=ctable
# Unix / Generic
- c: Unix
s: Generic
d: "Creates, in unix, a data.ndjson file where each record is formatted from json files in /some/data"
e: |-
find /some/data -name "*.json" -exec oafp {} output=json \; > data.ndjson
- c: Unix
s: Compute
d: Parses the Linux /proc/cpuinfo into an array
e: |-
cat /proc/cpuinfo | sed "s/^$/---/mg" | oafp in=yaml path="[?not_null(@)]|[?type(processor)=='number']" out=ctree
- c: Unix
s: Storage
d: Parses the result of the Unix ls command
e: |-
ls -lad --time-style="+%Y-%m-%d %H:%M" * | oafp in=lines path="map(&split_re(@,'\\s+').{permissions:[0],id:[1],user:[2],group:[3],size:[4],date:[5],time:[6],file:[7]},[])" linesjoin=true out=ctable
- c: Unix
s: Network
d: Parse the result of the Linux route command
e: |-
route | sed "1d" | oafp in=lines linesjoin=true linesvisual=true linesvisualsepre="\s+" out=ctable
- c: Unix
s: Network
d: Parse the Linux 'ip tcp_metrics' command
e: |-
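# The sed chain rewrites each 'ip tcp_metrics' line into a small YAML document (one per target) before parsing and converting the age/rtt values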
ip tcp_metrics | sed 's/^/target: /g' | sed 's/$/\n\n---\n/g' | sed 's/ \([a-z]*\) /\n\1: /g' | head -n -2 | oafp in=yaml path="[].{target:target,age:from_timeAbbr(replace(age,'[sec|\.]','','')),cwnd:cwnd,rtt:from_timeAbbr(rtt),rttvar:from_timeAbbr(rttvar),source:source}" sql="select * order by target" out=ctable
- c: Unix
s: Network
d: Parse the Linux 'arp' command output
e: |-
arp | oafp in=lines linesvisual=true linesjoin=true out=ctable
- c: Unix
s: Network
d: Loop over the current Linux active network connections
e: |-
oafp cmd="netstat -tun | sed \"1d\"" in=lines linesvisual=true linesjoin=true linesvisualsepre="\\s+(\\?\!Address)" out=ctable loop=1
- c: Unix
s: SystemCtl
d: Converting Unix's systemctl list-timers output
e: |-
systemctl list-timers | head -n -3 | oafp in=lines linesvisual=true linesjoin=true out=ctable
- c: Unix
s: SystemCtl
d: Converting Unix's systemctl list-units output
e: |-
systemctl list-units | head -n -6 | oafp in=lines linesvisual=true linesjoin=true path="[].delete(@,'')" out=ctable
- c: Unix
s: SystemCtl
d: Converting Unix's systemctl list-units output into an overview table
e: |-
systemctl list-units | head -n -6 | oafp in=lines linesvisual=true linesjoin=true path="[].delete(@,'')" sql="select \"LOAD\", \"ACTIVE SUB\", count(1) as \"COUNT\" group by \"LOAD\", \"ACTIVE SUB\"" sqlfilter=advanced out=ctable
- c: Unix
s: Storage
d: Converting Unix's df output
e: |-
df --output=target,fstype,size,used,avail,pcent | tail -n +2 | oafp in=lines linesjoin=true path="[].split_re(@, ' +').{filesystem:[0],type:[1],size:[2],used:[3],available:[4],use:[5]}" out=ctable
# Unix / Debian/Ubuntu
- c: Unix
s: Debian/Ubuntu
d: List all installed packages in a Debian/Ubuntu system
e: |-
apt list --installed | sed "1d" | oafp in=lines linesjoin=true path="[].split(@,' ').{pack:split([0],'/')[0],version:[1],arch:[2]}" out=ctable
# Unix / UBI
- c: Unix
s: UBI
d: List all installed packages in a UBI system
e: |-
microdnf repoquery --setopt=cachedir=/tmp --installed | oafp in=lines linesjoin=true path="[].replace(@,'(.+)\.(\w+)\.(\w+)\$','','\$1|\$2|\$3').split(@,'|').{package:[0],dist:[1],arch:[2]}" out=ctable
# Unix / Alpine
- c: Unix
s: Alpine
d: List all installed packages in an Alpine system
e: |-
apk list -I | oafp in=lines linesjoin=true path="[].replace(@,'(.+) (.+) {(.+)} \((.+)\) \[(.+)\]','','\$1|\$2|\$3|\$4').split(@,'|').{package:[0],arch:[1],source:[2],license:[3]}" out=ctable
# Unix / RedHat
- c: Unix
s: RedHat
d: List all installed packages in a RedHat or other rpm-based system (use rpm --querytags to list all available fields)
e: |-
rpm -qa --qf "%{NAME}|%{VERSION}|%{PACKAGER}|%{VENDOR}|%{ARCH}\n" | oafp in=lines linesjoin=true path="[].split(@,'|').{package:[0],version:[1],packager:[2],vendor:[3],arch:[4]}" from="sort(package)" out=ctable
# Unix / OpenSUSE
- c: Unix
s: OpenSuse
d: List all installed packages in an OpenSuse or other zypper-based system
e: |-
zypper se -is | egrep "^i" | oafp in=lines linesjoin=true path="[].split(@,'|').{name:[1],version:[2],arch:[3],repo:[4]}" out=ctable
# Generic / ElasticSearch
- c: ElasticSearch
s: Cluster
d: Get an overview of an ElasticSearch/OpenSearch cluster health
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_cat/health?format=json" $ES_EXTRA | oafp out=ctable
- c: ElasticSearch
s: Indices
d: Get an ElasticSearch/OpenSearch indices overview
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_cat/indices?format=json&bytes=b" $ES_EXTRA | oafp sql="select * order by index" out=ctable
- c: ElasticSearch
s: Cluster
d: Get an ElasticSearch/OpenSearch cluster nodes overview
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_cat/nodes?format=json" $ES_EXTRA | oafp sql="select * order by ip" out=ctable
- c: ElasticSearch
s: Cluster
d: Get an ElasticSearch/OpenSearch cluster's per-host data allocation
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_cat/allocation?format=json&bytes=b" $ES_EXTRA | oafp sql="select * order by host" out=ctable
- c: ElasticSearch
s: Cluster
d: Get an ElasticSearch/OpenSearch cluster's settings in flat form
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_cluster/settings?include_defaults=true&flat_settings=true" $ES_EXTRA | oafp out=ctree
- c: ElasticSearch
s: Cluster
d: Get an ElasticSearch/OpenSearch cluster's settings in non-flattened form
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_cluster/settings?include_defaults=true" $ES_EXTRA | oafp out=ctree
- c: ElasticSearch
s: Cluster
d: Get an ElasticSearch/OpenSearch cluster stats per node
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/_nodes/stats/indices/search" $ES_EXTRA | oafp out=ctree
- c: ElasticSearch
s: Indices
d: Get the ElasticSearch/OpenSearch settings for a specific index
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/kibana_sample_data_flights/_settings" $ES_EXTRA | oafp out=ctree
- c: ElasticSearch
s: Indices
d: Get an ElasticSearch/OpenSearch document count for an index
e: |-
export ES_URL=http://elastic.search:9200
export ES_EXTRA="--insecure"
curl -s "$ES_URL/kibana_sample_data_flights/_count" $ES_EXTRA | oafp
- c: JSON Schemas
s: Lists
d: Get a list of JSON schemas from Schema Store catalog
e: |-
oafp cmd="curl https://raw.githubusercontent.com/SchemaStore/schemastore/master/src/api/json/catalog.json" path="schemas[].{name:name,description:description,files:to_string(fileMatch)}" out=ctable
- c: Kubernetes
s: Kubectl
d: List of Kubernetes pods per namespace and kind using kubectl
e: |-
oafp cmd="kubectl get pods -A -o json" path="items[].{ns:metadata.namespace,kind:metadata.ownerReferences[].kind,name:metadata.name,status:status.phase,restarts:sum(nvl(status.containerStatuses[].restartCount,from_slon('[0]'))),node:spec.nodeName,age:timeago(nvl(status.startTime,now(\`0\`)))}" sql="select * order by status,name" output=ctable
- c: Kubernetes
s: Kubectl
d: List of Kubernetes CPU, memory and storage stats per node using kubectl
e: |-
oafp cmd="kubectl get nodes -o json" path="items[].{node:metadata.name,totalCPU:status.capacity.cpu,allocCPU:status.allocatable.cpu,totalMem:to_bytesAbbr(from_bytesAbbr(status.capacity.memory)),allocMem:to_bytesAbbr(from_bytesAbbr(status.allocatable.memory)),totalStorage:to_bytesAbbr(from_bytesAbbr(status.capacity.\"ephemeral-storage\")),allocStorage:to_bytesAbbr(to_number(status.allocatable.\"ephemeral-storage\")),conditions:join(\`, \`,status.conditions[].reason)}" output=ctable
- c: Kubernetes
s: Kubectl
d: Build an output table with Kubernetes pods with namespace, pod name, container name and corresponding resources using kubectl
e: |-
kubectl get pods -A -o json | oafp path="items[].amerge({ ns: metadata.namespace, pod: metadata.name, phase: status.phase }, spec.containers[].{ container: name, resources: to_slon(resources) })[]" sql="select ns, pod, container, phase, resources order by ns, pod, container" out=ctable
- c: Kubernetes
s: Kubectl
d: Build an output table with Kubernetes pods with node, namespace, pod name, container name and corresponding resources using kubectl
e: |-
kubectl get pods -A -o json | oafp path="items[].amerge({ node: spec.nodeName, ns: metadata.namespace, pod: metadata.name, phase: status.phase }, spec.containers[].{ container: name, resources: to_slon(resources) })[]" sql="select node, ns, pod, container, phase, resources order by node, ns, pod, container" out=ctable
- c: AI
s: Ollama
d: Setting up access to Ollama and asking an AI LLM model for data
e: |-
export OAFP_MODEL="(type: ollama, model: 'mistral', url: 'https://models.local', timeout: 900000)"
echo "Output a JSON array with 15 cities where each entry has the 'city' name, the estimated population and the corresponding 'country'" | oafp input=llm output=json > data.json
oafp data.json output=ctable sql="select * order by population desc"
- c: AI
s: OpenAI
d: Setting up the OpenAI LLM model and gathering the data into a data.json file
e: |-
export OAFP_MODEL="(type: openai, model: gpt-3.5-turbo, key: ..., timeout: 900000)"
echo "list all United Nations secretaries with their corresponding 'name', their mandate 'begin date', their mandate 'end date' and their corresponding secretary 'numeral'" | oafp input=llm output=json > data.json
- c: OpenAF
s: oPacks
d: "Listing all currently accessible OpenAF's oPacks"
e: |-
oaf -c "sprint(getOPackRemoteDB())" | oafp maptoarray=true opath="[].{name:name,description:description,version:version}" from="sort(name)" out=ctable
- c: OpenAF
s: OS
d: Current OS information visible to OpenAF
e: |-
oafp -v path=os
- c: OpenAF
s: Network
d: List all network addresses returned from the current DNS server for a hostname using OpenAF
e: |-
DOMAIN=yahoo.com && oaf -c "sprint(ow.loadNet().getDNS('$DOMAIN'))" | oafp from="sort(Address)" out=ctable
- c: OpenAF
s: Network
d: List all MX (mail servers) network addresses from the current DNS server for a hostname using OpenAF
e: |-
DOMAIN=gmail.com && TYPE=MX && oaf -c "sprint(ow.loadNet().getDNS('$DOMAIN','$TYPE'))" | oafp from="sort(Address)" out=ctable
- c: OpenAF
s: Network
d: Gets all the DNS host addresses for a provided domain and ensures that the output is always a list
e: |-
DOMAIN="nattrmon.io" && oafp in=oaf data="ow.loadNet().getDNS('$DOMAIN','a',__,true)" path="if(type(@)=='array',[].Address.HostAddress,[Address.HostAddress]).map(&{ip:@},[])" out=ctable
- c: OpenAF
s: TLS
d: List the TLS certificates of a target host with sorted alternative names using OpenAF
e: |-
DOMAIN=yahoo.com && oaf -c "sprint(ow.loadNet().getTLSCertificates('$DOMAIN',443))" | oafp path="[].{issuer:issuerDN,subject:subjectDN,notBefore:notBefore,notAfter:notAfter,alternatives:join(' | ',sort(map(&[1],nvl(alternatives,\`[]\`))))}" out=ctree
- c: OpenAF
s: Channels
d: "Copy the json result of a command into an etcd database using OpenAF's channels"
e: |-
oaf -c "\$o(io.listFiles('.').files,{__format:'json'})" | oafp out=ch ch="(type: etcd3, options: (host: localhost, port: 2379), lib: 'etcd3.js')" chkey=canonicalPath
- c: OpenAF
s: Channels
d: "Getting all data stored in an etcd database using OpenAF's channels"
e: |-
echo "" | oafp in=ch inch="(type: etcd3, options: (host: localhost, port: 2379), lib: 'etcd3.js')" out=ctable
- c: OpenAF
s: Channels
d: "Store the json results of a command into a H2 MVStore file using OpenAF's channels"
e: |-
oaf -c "\$o(listFilesRecursive('.'),{__format:'json'})" | oafp out=ch ch="(type: mvs, options: (file: data.db))" chkey=canonicalPath
- c: OpenAF
s: Channels
d: "Retrieve all keys stores in a H2 MVStore file using OpenAF's channels"
e: |-
echo "" | oafp in=ch inch="(type: mvs, options: (file: data.db))" out=ctable
- c: OpenAF
s: Channels
d: "Perform a query to a metric & label, with a start and end time, to a Prometheus server using OpenAF's channels"
e: |-
oafp in=ch inch="(type:prometheus,options:(urlQuery:'http://prometheus.local'))" inchall=true data="(start:'2024-03-22T19:00:00.000Z',end:'2024-03-22T19:05:00.000Z',step:60,query:go_memstats_alloc_bytes_total{job=\"prometheus\"})" path="[].values[].{date:to_date(mul([0],to_number('1000'))),value:[1]}" out=ctable
- c: Windows
s: Network
d: "Output a table with the list of network interfaces using Windows' PowerShell"
e: |-
Get-NetIPAddress | ConvertTo-Json | .\oafp.bat path="[].{ipAddress:IPAddress,prefixLen:PrefixLength,interface:InterfaceAlias}" sql=select\ *\ order\ by\ interface out=ctable
- c: Windows
s: Network
d: "Output a table with the current route table using Windows' PowerShell"
e: |-
Get-NetRoute | ConvertTo-Json | .\oafp.bat path="[].{destination:DestinationPrefix,gateway:NextHop,interface:InterfaceAlias,metric:InterfaceMetric}" sql=select\ *\ order\ by\ interface,destination out=ctable
- c: Windows
s: Storage
d: "Output a table with the attached disk information using Windows' PowerShell"
e: |-
Get-Disk | ConvertTo-Csv -NoTypeInformation | .\oafp.bat in=csv path="[].{id:trim(UniqueId),name:FriendlyName,isBoot:IsBoot,location:Location,size:to_bytesAbbr(to_number(Size)),allocSize:to_bytesAbbr(to_number(AllocatedSize)),sectorSize:LogicalSectorSize,phySectorSize:PhysicalSectorSize,numPartitions:NumberOfPartitions,partitioning:PartitionStyle,health:HealthStatus,bus:BusType,manufacturer:Manufacturer,model:Model,firmwareVersion:FirmwareVersion,serialNumber:SerialNumber}" out=ctable
- c: Windows
s: PnP
d: "Output a table with USB/PnP devices using Windows' PowerShell"
e: |-
Get-PnpDevice -PresentOnly | ConvertTo-Csv -NoTypeInformation | .\oafp.bat in=csv path="[].{class:PNPClass,service:Service,name:FriendlyName,id:InstanceId,description:Description,deviceId:DeviceID,status:Status,present:Present}" sql=select\ *\ order\ by\ class,service out=ctable
- c: Chart
s: Unix
d: "Output a chart with the current Unix load using uptime"
e: |-
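# Extracts the three load averages from 'uptime' with a regex and charts the first value in a continuous loop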
oafp cmd="uptime" in=raw path="replace(trim(@), '.+ ([\d\.]+),? ([\d\.]+),? ([\d\.]+)\$', '', '\$1|\$2|\$3').split(@,'|')" out=chart chart="dec2 [0]:green:load -min:0" loop=1 loopcls=true
- c: APIs
s: Network
d: Generating a simple map of the current public IP address
e: |-
curl -s https://ifconfig.co/json | oafp flatmap=true out=map
- c: APIs
s: Network
d: Converting the Cloudflare DNS trace info
e: |-
curl -s https://1.1.1.1/cdn-cgi/trace | oafp in=ini out=ctree
- c: APIs
s: Network
d: Converting the Google DNS DoH query result
e: |-
DOMAIN=yahoo.com && oafp path=Answer from="sort(data)" out=ctable url="https://8.8.8.8/resolve?name=$DOMAIN&type=a"
- c: OpenAF
s: oafp
d: "List the OpenAF's oafp examples by category, sub-category and description"
e: |-
oafp url="https://ojob.io/oafp-examples.yaml" in=yaml path="data[].{category:c,subCategory:s,description:d}" from="sort(category,subCategory,description)" out=ctable
- c: OpenAF
s: oafp
d: "Filter the OpenAF's oafp examples list by a specific word in the description"
e: |-
oafp url="https://ojob.io/oafp-examples.yaml" in=yaml out=template path=data templatepath=tmpl sql="select * where d like '%something%'"
- c: AI
s: Summarize
d: "Use an AI LLM model to summarize the weather information provided in a JSON format"
e: |-
export OAFP_MODEL="(type: openai, model: 'gpt-3.5-turbo', key: '...', timeout: 900000)"
oafp url="https://wttr.in?format=j2" llmcontext="current and forecast weather" llmprompt="produce a summary of the current and forecasted weather" out=md
- c: AI
s: Prompt
d: Example of generating data from an AI LLM prompt
e: |-
export OAFP_MODEL="(type: openai, model: 'gpt-3.5-turbo', key: '...', timeout: 900000)"
oafp in=llm data="produce a list of 25 species of 'flowers' with their english and latin name and the continent where it can be found" out=json > data.json
- c: Generic
s: RSS
d: Example of generating an HTML list of titles, links and publication dates from an RSS feed
e: |-
oafp url="https://blog.google/rss" path="rss.channel.item" sql="select title, link, pubDate" output=html
- c: Mac
s: Info
d: Parses the current Mac OS overview information
e: |-
system_profiler SPSoftwareDataType -json | oafp path="SPSoftwareDataType[0]" out=ctree
- c: AWS
s: EC2
d: Given all AWS EC2 instances in an account, produces a table with name, type, vpc and private ip sorted by vpc
e: |-
aws ec2 describe-instances | oafp path="Reservations[].Instances[].{name:join('',Tags[?Key=='Name'].Value),type:InstanceType,vpc:VpcId,ip:PrivateIpAddress} | sort_by(@, &vpc)" output=ctable
- c: AWS
s: EKS
d: "Builds an excel spreadsheet with all persistent volumes associated with an AWS EKS 'XYZ' with the corresponding Kubernetes namespace, pvc and pv names"
e: |-
# sudo yum install -y fontconfig
aws ec2 describe-volumes | oafp path="Volumes[?Tags[?Key=='kubernetes.io/cluster/XYZ']|[0].Value=='owned'].{VolumeId:VolumeId,Name:Tags[?Key=='Name']|[0].Value,KubeNS:Tags[?Key=='kubernetes.io/created-for/pvc/namespace']|[0].Value,KubePVC:Tags[?Key=='kubernetes.io/created-for/pvc/name']|[0].Value,KubePV:Tags[?Key=='kubernetes.io/created-for/pv/name']|[0].Value,AZ:AvailabilityZone,Size:Size,Type:VolumeType,CreateTime:CreateTime,State:State,AttachTime:join(',',nvl(Attachments[].AttachTime,from_slon('[]'))[]),InstanceId:join(',',nvl(Attachments[].InstanceId,from_slon('[]'))[])}" from="sort(KubeNS,KubePVC)" out=xls xlsfile=xyz_pvs.xlsx
- c: GitHub
s: Releases
d: "Builds a table of GitHub project releases"
e: |-
curl -s https://api.github.com/repos/openaf/openaf/releases | oafp sql="select name, tag_name, published_at order by published_at" output=ctable
- c: GitHub
s: Releases
d: "Parses the latest GitHub project release markdown notes"
e: |-
curl -s https://api.github.com/repos/openaf/openaf/releases | oafp path="[0].body" output=md
- c: OpenAF
s: Channels
d: "Store and retrieve data from a RocksDB database"
e: |-
# Install rocksdb opack: 'opack install rocksdb'
#
# Storing data
oafp cmd="oaf -c \"sprint(listFilesRecursive('/usr/bin'))\"" out=ch ch="(type: rocksdb, lib: rocksdb.js, options: (path: db))" chkey=canonicalPath
# Retrieve data
echo "" | oafp in=ch inch="(type: rocksdb, lib: rocksdb.js, options: (path: db))" out=pjson
- c: OpenAF
s: Channels
d: "Store and retrieve data from a Redis database"
e: |-
# Install redis opack: 'opack install redis'
#
# Storing data
oafp cmd="oaf -c \"sprint(listFilesRecursive('/usr/bin'))\"" out=ch ch="(type: redis, lib: redis.js, options: (host: '127.0.0.1', port: 6379))" chkey=canonicalPath
# Retrieve data
echo "" | oafp in=ch inch="(type: redis, lib: redis.js, options: (host: '127.0.0.1', port: 6379))" out=pjson
- c: Generic
s: Excel
d: "Store and retrieve data from an Excel spreadsheet"
e: |-
# Storing data
oafp cmd="oaf -c \"sprint(listFilesRecursive('/usr/bin'))\"" out=xls xlsfile=data.xlsx
# Retrieve data
oafp in=xls file=data.xlsx xlscol=A xlsrow=1 out=pjson
- c: Kubernetes
s: Containers
d: "Parse the Linux cgroup cpu stats on a container running in Kubernetes"
e: |-
cat /sys/fs/cgroup/cpu.stat | sed 's/ /: /g' | oafp in=yaml out=ctree
- c: Grid
s: Kubernetes
d: "Displays a continuous updating grid with a line chart with the number of CPU throtlles and bursts recorded in the Linux cgroup cpu stats of a container running in Kubernetes and the source cpu.stats data"
e: |-
oafp cmd="cat /sys/fs/cgroup/cpu.stat | sed 's/ /: /g'" in=yaml out=grid grid="[[(title:cpu.stat,type:tree)|(title:chart,type:chart,obj:'int nr_throttled:red:throttled nr_bursts:blue:bursts -min:0 -vsize:10')]]" loop=1
- c: AWS
s: Lambda
d: Prepares a table of AWS Lambda functions with their corresponding main details
e: |-
aws lambda list-functions | oafp path="Functions[].{Name:FunctionName,Runtime:Runtime,Arch:join(',',Architectures),Role:Role,MemorySize:MemorySize,EphStore:EphemeralStorage.Size,CodeSize:CodeSize,LastModified:LastModified}" from="sort(Name)" out=ctable
- c: AI
s: Conversation
d: Send multiple requests to an OpenAI model keeping the conversation between interactions and setting the temperature parameter
e: |-
export OAFP_MODEL="(type: openai, model: 'gpt-3.5-turbo-0125', key: ..., timeout: 900000, temperature: 0)"
echo "List all countries in the european union" | oafp in=llm out=ctree llmconversation=cvst.json
echo "Add the corresponding country capital" | oafp in=llm out=ctree llmconversation=cvst.json
rm cvst.json
- c: AWS
s: Lambda
d: Prepares a table, for a specific AWS Lambda function during a specific time period, with the number of invocations and the minimum, average and maximum duration per period from AWS CloudWatch
e: |-
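# Collects invocation counts and min/avg/max durations as ndjson, then joins and aggregates them per timestamp into a single table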
export _SH="aws cloudwatch get-metric-statistics --namespace AWS/Lambda --start-time 2024-03-01T00:00:00Z --end-time 2024-04-01T00:00:00Z --period 3600 --dimensions Name=FunctionName,Value=my-function"
$_SH --statistics Sum --metric-name Invocations | oafp path="Datapoints[].{ts:Timestamp,cnt:Sum}" out=ndjson > data.ndjson
$_SH --statistics Average --metric-name Duration | oafp path="Datapoints[].{ts:Timestamp,avg:Average}" out=ndjson >> data.ndjson
$_SH --statistics Minimum --metric-name Duration | oafp path="Datapoints[].{ts:Timestamp,min:Minimum}" out=ndjson >> data.ndjson
$_SH --statistics Maximum --metric-name Duration | oafp path="Datapoints[].{ts:Timestamp,max:Maximum}" out=ndjson >> data.ndjson
oafp data.ndjson ndjsonjoin=true opath="[].{ts:ts,cnt:nvl(cnt,\`0\`),min:nvl(min,\`0\`),avg:nvl(avg,\`0\`),max:nvl(max,\`0\`)}" out=json | oafp isql="select \"ts\",max(\"cnt\") \"cnt\",max(\"min\") \"min\",max(\"avg\") \"avg\",max(\"max\") \"max\" group by \"ts\" order by \"ts\"" opath="[].{ts:ts,cnt:cnt,min:from_ms(min,'(abrev:true,pad:true)'),avg:from_ms(avg,'(abrev:true,pad:true)'),max:from_ms(max,'(abrev:true,pad:true)')}" out=ctable
- c: Generic
s: Arrays
d: Converting an array of strings into an array of maps
e: |-
oafp -v path="java.params[].insert(from_json('{}'), 'param', @).insert(@, 'len', length(param))"
- c: Grid
s: Mac
d: Shows a grid with the Mac network metrics and 4 charts for in, out packets and in, out bytes
e: |-
# opack install mac
sudo powermetrics --help > /dev/null
oafp libs=Mac cmd="sudo powermetrics --format=plist --show-initial-usage -n 0 --samplers network" in=plist path=network out=grid grid="[[(title:data,path:@,xsnap:2)]|[(title:in packets,type:chart,obj:'int ipackets:blue:in')|(title:out packets,type:chart,obj:'int opackets:red:out')]|[(title:in bytes,type:chart,obj:'int ibytes:blue:in')|(title:out bytes,type:chart,obj:'int obytes:red:out')]]" loop=1
- c: Grid
s: Mac
d: Shows a grid with the Mac storage metrics and 4 charts for read, write IOPS and read, write bytes per second
e: |-
# opack install mac
sudo powermetrics --help > /dev/null
oafp libs=Mac cmd="sudo powermetrics --format=plist --show-initial-usage -n 0 --samplers disk" in=plist path=disk out=grid grid="[[(title:data,path:@,xsnap:2)]|[(title:read iops,type:chart,obj:'dec3 rops_per_s:blue:read_iops')|(title:write iops,type:chart,obj:'dec3 wops_per_s:red:write_iops')]|[(title:read bytes per sec,type:chart,obj:'bytes rbytes_per_s:blue:read_bytes_per_sec')|(title:write bytes per sec,type:chart,obj:'bytes wbytes_per_s:red:write_bytes_per_sec')]]" loop=1
- c: Generic
s: RSS
d: Builds an HTML file with the current linked news titles, publication date and source from Google News RSS.
e: |-
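# Turns each RSS item title into a markdown link, sorts by date (newest first), renders a markdown table and converts it to HTML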
RSS="https://news.google.com/rss" && oafp url="$RSS" path="rss.channel.item[].{title:replace(t(@,'[{{title}}]({{link}})'),'\|','g','\\|'),date:pubDate,source:source._}" from="sort(-date)" out=mdtable | oafp in=md out=html
- c: Mac
s: Safari
d: Get a list of all Mac OS Safari bookmarks into a CSV file.
e: |-
# opack install mac
oafp ~/Library/Safari/Bookmarks.plist libs=Mac path="Children[].map(&{category:get('cat'),title:URIDictionary.title,url:URLString},setp(@,'Title','cat').nvl(Children,from_json('[]')))[][]" out=csv > bookmarks.csv
- c: Docker
s: Listing
d: List all containers with their corresponding labels parsed and sorted.
e: |-
docker ps -a --format=json | oafp in=ndjson ndjsonjoin=true out=ctree path="[].insert(@,'Labels',sort_by(split(Labels,',')[].split(@,'=').{key:[0],value:[1]},&key))" out=ctree
- c: Docker
s: Listing
d: List all containers with the docker-compose project, service name, file, id, name, image, creation time, status, networks and ports.
e: |-
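# The first oafp parses each container's Labels string into sorted key/value pairs; the second extracts the docker-compose project/service/config-file labels and sorts the table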
docker ps -a --format=json | oafp in=ndjson ndjsonjoin=true out=ctree path="[].insert(@,'Labels',sort_by(split(Labels,',')[].split(@,'=').{key:[0],value:[1]},&key))" out=json | oafp path="[].{dcProject:nvl(Labels[?key=='com.docker.compose.project']|[0].value,''),dcService:nvl(Labels[?key=='com.docker.compose.service']|[0].value,''),ID:ID,Names:Names,Image:Image,Created:RunningFor,Status:Status,Ports:Ports,Networks:Networks,dcFile:nvl(Labels[?key=='com.docker.compose.project.config_files']|[0].value,'')}" sql="select * order by dcProject,dcService,Networks,Names" out=ctable
- c: Grid
s: Unix
d: "On an Unix/Linux system supporting 'ps' output formats %cpu and %mem, will output a chart with the percentage of cpu and memory usage of a provided pid (e.g. 12345)"
e: |-
oafp cmd="ps -p 12345 -o %cpu,%mem" in=lines linesvisual=true linesvisualsepre="\\s+" out=chart chart="int '\"%CPU\"':red:cpu '\"%MEM\"':blue:mem -min:0 -max:100" loop=1 loopcls=true
- c: Generic
s: Hex
d: Outputs a hexadecimal representation of the characters of the provided file, allowing adjustment of how many are shown per line/row.
e: |-
oafp some.file in=rawhex inrawhexline=15 out=ctable
- c: Unix
s: Activity
d: "Uses the Linux command 'last' output to build a table with user, tty, from and period of activity for Debian based Linuxs"
e: |-
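# Drops the trailing summary lines and the 'no logout'/'system boot' entries before splitting each line into user, tty, from and period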
oafp cmd="last" in=lines linesjoin=true path="[:-3]|[?contains(@,'no logout')==\`false\`&&contains(@,'system boot')==\`false\`].split_re(@,' \\s+').{user:[0],tty:[1],from:[2],period:join(' ',[3:])}" out=ctable
- c: Unix
s: Activity
d: "Uses the Linux command 'last' output to build a table with user, tty, from and period of activity for RedHat based Linuxs"
e: |-
last | sed '/^$/d;$d;$d' | oafp in=lines linesjoin=true path="[].split_re(@, '\\s+').{user: [0], tty: [1], from: [2], login_time: join(' ', [3:7])}" out=ctable
- c: Mac
s: Activity
d: "Uses the Mac terminal command 'last' output to build an activity table with user, tty, from, login-time and logout-time"
e: |-
oafp cmd="last --libxo json" path="\"last-information\".last" out=ctable
- c: OpenAF
s: OS
d: Using OpenAF, parse the current environment variables
e: |-
oaf -c "sprint(getEnvs())" | oafp sortmapkeys=true out=ctree
- c: APIs
s: iTunes
d: "Search the Apple's iTunes database for a specific term"
e: |-
TRM="Mozart" && oafp url="https://itunes.apple.com/search?term=$TRM" out=ctree
- c: APIs
s: Public Holidays
d: "Return the public holidays for a given country on a given year"
e: |-
COUNTRY=US && YEAR=2024 && oafp url="https://date.nager.at/api/v2/publicholidays/$YEAR/$COUNTRY" path="[].{date:date,localName:localName,name:name}" out=ctable
- c: Unix
s: named
d: Converts a Linux named log, for client queries, into a CSV
e: |-
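# Keeps only the 'client' query lines, parses the date/time, splits the client address into IP and port and joins the remaining fields as the query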
cat named.log | oafp in=lines linesjoin=true path="[?contains(@,' client ')==\`true\`].split(@,' ').{datetime:to_datef(concat([0],concat(' ',[1])),'dd-MMM-yyyy HH:mm:ss.SSS'),session:[3],sourceIP:replace([4],'(.+)#(\d+)','','\$1'),sourcePort:replace([4],'(.+)#(\d+)','','\$2'),target:replace([5],'\((.+)\):','','\$1'),query:join(' ',[6:])}" out=csv
- c: APIs
s: Space
d: How many people are currently in space and in which craft
e: |-
curl -s http://api.open-notify.org/astros.json | oafp path="people" sql="select \"craft\", count(1) \"people\" group by \"craft\"" output=ctable
- c: APIs
s: NASA
d: Markdown table of near-Earth objects by name, magnitude, whether they are potentially hazardous asteroids, and corresponding distance
e: |-
curl -s "https://api.nasa.gov/neo/rest/v1/feed?API_KEY=DEMO_KEY" | oafp path="near_earth_objects" maptoarray=true output=json | oafp path="[0][].{name:name,magnitude:absolute_magnitude_h,hazardous:is_potentially_hazardous_asteroid,distance:close_approach_data[0].miss_distance.kilometers}" sql="select * order by distance" output=mdtable
- c: AI
s: Mistral
d: List all available LLM models at mistral.ai
e: |-
# export OAFP_MODEL="(type: openai, model: 'llama3', url: 'https://api.mistral.ai', key: '...', timeout: 900000, temperature: 0)"
oafp in=llmmodels data="()" path="sort([].id)"
- c: AI
s: TogetherAI
d: List all available LLM models at together.ai with their corresponding type and price components, ordered from most expensive for input and output
e: |-
# export OAFP_MODEL="(type: openai, model: 'meta-llama/Meta-Llama-3-70B', url: 'https://api.together.xyz', key: '...', timeout: 9000000, temperature: 0)"
oafp in=llmmodels data="()" path="[].{id:id,name:display_name,type:type,ctxLen:context_length,priceHour:pricing.hourly,priceIn:pricing.input,priceOut:pricing.output,priceBase:pricing.base,priceFineTune:pricing.finetune}" sql="select * order by priceIn desc,priceOut desc" out=ctable
- c: OpenVPN
s: List
d: "When using the container nmaguiar/openvpn it's possible to convert the list of all clients order by expiration/end date"
e: |-
oafp cmd="docker exec openvpn ovpn_listclients" in=csv path="[].{name:name,begin:to_datef(begin,'MMM dd HH:mm:ss yyyy z'),end:to_datef(end,'MMM dd HH:mm:ss yyyy z'),status:status}" out=ctable sql="select * order by end desc"
- c: Generic
s: RSS
d: "Parses the Slashdot's RSS feed news into a quick clickable HTML page in a browser"
e: |-
RSS="http://rss.slashdot.org/Slashdot/slashdot" && oafp url="$RSS" path="RDF.item[].{title:replace(t(@,'[{{title}}]({{link}})'),'\|','g','\\|'),department:department,date:date}" from="sort(-date)" out=mdtable | oafp in=md out=html
- c: Unix
s: Envs
d: Converts the Linux env command result into a table of environment variables and corresponding values
e: |-
env | oafp in=ini path="map(&{key:@,value:to_string(get(@))},sort(keys(@)))" out=ctable
- c: Diff
s: Envs
d: Given two JSON files with environment variables, performs a diff and returns a colored result with the corresponding differences
e: |-
env | oafp in=ini out=json > data1.json
# change one or more environment variables
env | oafp in=ini out=json > data2.json
oafp in=oafp data="[(file: data1.json)|(file: data2.json)]" diff="(a:'[0]',b:'[1]')" color=true
- c: Diff
s: Path
d: Given two JSON files with the parsed PATH environment variable, performs a diff and returns a colored result with the corresponding differences
e: |-
env | oafp in=ini path="PATH.split(@,':')" out=json > data1.json
# export PATH=$PATH:/some/new/path
env | oafp in=ini path="PATH.split(@,':')" out=json > data2.json
oafp in=oafp data="[(file: data1.json)|(file: data2.json)]" diff="(a:'sort([0])',b:'sort([1])')" color=true
- c: AI
s: Groq
d: Ask a Groq-hosted LLM model to perform a calculation and return its reasoning
e: |-
export OAFP_MODEL="(type: openai, model: 'llama3-70b-8192', key: '...', url: 'https://api.groq.com/openai', timeout: 900000, temperature: 0)"
oafp in=llm data="how much does light take to travel from Tokyo to Osaka in ms; return a 'time_in_ms' and a 'reasoning'"
- c: AI
s: Classification
d: Given a list of news titles with corresponding dates and sources, add a category and sub-category field to each
e: |-
export OAFP_MODEL="(type: ollama, model: 'llama3', url: 'http://ollama.local', timeout: 900000, temperature: 0)"
# get 10 news titles
RSS="https://news.google.com/rss" && oafp url="$RSS" path="rss.channel.item[].{title:title,date:pubDate,source:source._}" from="sort(-date)" out=json sql="select * limit 5" > news.json
# add category and sub-category
oafp news.json llmcontext="list a news titles with date and source" llmprompt="keeping the provided title, date and source add a category and sub-category fields to the provided list" out=json > newsCategorized.json
oafp newsCategorized.json getlist=true out=ctable
- c: Mac
s: Info
d: Parses the current Mac OS hardware information
e: |-
system_profiler SPHardwareDataType -json | oafp path="SPHardwareDataType[0]" out=ctree
- c: OpenAF
s: oJob.io
d: Parses ojob.io/news results into a clickable news title HTML page.
e: |-
ojob ojob.io/news/awsnews __format=json | oafp path="[].{title:replace(t(@,'[{{title}}]({{link}})'),'\|','g','\\|'),date:date}" from="sort(-date)" out=mdtable | oafp in=md out=html
- c: OpenAF
s: oJob.io
d: "Retrieves the list of oJob.io's jobs and filters which start by 'ojob.io/news' to display them in a rectangle"
e: |-
oafp url="https://ojob.io/index.json" path="sort(init.l)[].replace(@,'^https://(.+)\.(yaml|json)$','','\$1')|[?starts_with(@, 'ojob.io/news')]" out=map
- c: Unix
s: Memory map
d: "Given an Unix process will output a table with process's components memory address, size in bytes, permissions and owner"
e: |-
pmap 12345 | sed '1d;$d' | oafp in=lines linesjoin=true path="[].split_re(@, '\\s+').{address:[0],size:from_bytesAbbr([1]),perm:[2],owner:join('',[3:])}" out=ctable
- c: Generic
s: HTML
d: "Generate a HTML with table of emoticons/emojis by category, group, name, unicode and html code."
e: |-
oafp url="https://emojihub.yurace.pro/api/all" path="[].{category:category,group:group,name:name,len:length(unicode),unicode:join('<br>',unicode),htmlCode:replace(join('<br>', htmlCode),'&','g','&'),emoji:join('', htmlCode)}" out=mdtable | oafp in=md out=html
- c: Generic
s: Text
d: Search for a word in the English dictionary, returning phonetics, meanings, synonyms, antonyms, etc.
e: |-
WORD="google" && oafp url="https://api.dictionaryapi.dev/api/v2/entries/en/$WORD"
- c: Generic
s: Template
d: "Given a meal name will search 'The Meal DB' site for the corresponding recipe and render a markdown HTML of the corresponding recipe."
e: |-
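# Writes a Handlebars template to _template.hbs, pairs the strIngredientN/strMeasureN fields into an 'ingredients' array, renders the template as markdown and converts it to HTML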
MEAL="Pizza" && echo "# {{strMeal}}\n> {{strCategory}} | {{strArea}}\n<a href=\"{{strYoutube}}\"><img align=\"center\" width=1280 src=\"{{strMealThumb}}\"></a>\n\n## Ingredients\n\n| Ingredient | Measure |\n|---|---|\n{{#each ingredients}}|{{ingredient}}|{{measure}}|\n{{/each}}\n\n## Instructions\n\n\n\n{{{strInstructions}}}" > _template.hbs && oafp url="https://www.themealdb.com/api/json/v1/1/search.php?s=$MEAL" path="set(meals[0],'o').set(k2a(get('o'),'strIngredient','i',\`true\`),'is').set(k2a(get('o'),'strMeasure','m',\`true\`),'ms')|get('o').insert(get('o'),'ingredients',ranges(length(get('is')),\`0\`,\`1\`).map(&{ ingredient: geta('is',@).i, measure: geta('ms',@).m }, @))" out=json | oafp out=template template=_template.hbs | oafp in=md out=html htmlcompact=true
- c: Ollama
s: List models
d: Parses the list of models currently in an Ollama deployment
e: |-
export OAFP_MODEL="(type: ollama, model: 'llama3', url: 'https://models.local', timeout: 900000)"
oafp in=llmmodels data="()" out=ctable path="[].{name:name,parameters:details.parameter_size,quantization:details.quantization_level,format:details.format,family:details.family,parent:details.parent,size:size}" sql="select * order by parent,family,format,parameters,quantization"
- c: AI
s: Generate data
d: Generates synthetic data by making parallel prompts to an LLM model and then cleans up the result, removing duplicate records, into a final csv file.
e: |-
# Set the LLM model to use
export OAFP_MODEL="(type: openai, url: 'https://api.scaleway.ai', key: '111-222-333', model: 'llama-3.1-70b-instruct', headers: (Content-Type: application/json))"
# Run 5 parallel prompts to generate data
oafp data="()" path="range(\`5\`)" out=cmd outcmdtmpl=true outcmd="oafp data='generate a list of 10 maps each with firstName, lastName, city and country' in=llm out=ndjson" > data.ndjson
# Clean up the generated data removing duplicates
oafp data.ndjson in=ndjson ndjsonjoin=true removedups=true out=csv > data.csv
- c: AI
s: Generate data
d: Generates synthetic data using an LLM model and then uses the recorded conversation to generate more data respecting the provided instructions
e: |-
export OAFP_MODEL="(type: ollama, model: 'llama3', url: 'https://models.local', timeout: 900000)"
oafp in=llm llmconversation=conversation.json data="Generate #5 synthetic transaction record data with the following attributes: transaction id - a unique alphanumeric code; date - a date in YYYY-MM-DD format; amount - a dollar amount between 10 and 1000; description - a brief description of the transaction" getlist=true out=ndjson > data.ndjson
oafp in=llm llmconversation=conversation.json data="Generate 5 more records" getlist=true out=ndjson >> data.ndjson
oafp data.ndjson ndjsonjoin=true out=ctable sql="select * order by transaction_id"
- c: Kubernetes
s: Kubectl
d: Produces a list of pods' containers per namespace with the corresponding images and assigned nodes.
e: |-
kubectl get pods -A -o json | oafp path="items[].amerge({namespace: metadata.namespace, pod: metadata.name, nodeName: spec.nodeName},spec.containers[].{container: name, image: image, pullPolicy: imagePullPolicy})[]" sql="select namespace, pod, container, image, pullPolicy, nodeName order by namespace, pod, container" out=ctable
- c: OpenAF
s: SFTP
d: Generates a file list with filepath, size, permissions, creation and last-modified time from an SFTP connection with user and password
e: |-
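# Uses an OpenAF $ssh connection to list the remote path and converts the epoch-second timestamps into dates before building the table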
HOST="my.server" && PORT=22 && LOGIN="user" && PASS=$"abc123" && LSPATH="." && oaf -c "sprint(\$ssh({host:'$HOST',login:'$LOGIN',pass:'$PASS'}).listFiles('$LSPATH'))" | oafp out=ctable path="[].{isDirectory:isDirectory,filepath:filepath,size:size,createTime:to_date(mul(createTime,\`1000\`)),lastModified:to_date(mul(lastModified,\`1000\`)),permissions:permissions}" from="sort(-isDirectory,filepath)"
- c: OpenAF
s: SFTP
d: Generates a file list with filepath, size, permissions, creation and last-modified time from an SFTP connection with user, private key and password
e: |-
HOST="my.server" && PORT=22 && PRIVID=".ssh/id_rsa" && LOGIN="user" && PASS=$"abc123" && LSPATH="." && oaf -c "sprint(\$ssh({host:'$HOST',login:'$LOGIN',pass:'$PASS',id:'$PRIVID'}).listFiles('$LSPATH'))" | oafp out=ctable path="[].{isDirectory:isDirectory,filepath:filepath,size:size,createTime:to_date(mul(createTime,\`1000\`)),lastModified:to_date(mul(lastModified,\`1000\`)),permissions:permissions}" from="sort(-isDirectory,filepath)"
- c: Kubernetes
s: Kubectl
d: Executes a recursive file list find command in a specific pod, namespace and path, converting the result into a table.
e: |-
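# Runs 'find' inside the pod with 'stat -c' printing one JSON object per file; oafp joins the ndjson stream and converts the epoch modification times into dates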
NS=default && POD=my-pod-5c9cfb87d4-r6dlp && LSPATH=/data && kubectl exec -n $NS $POD -- find $LSPATH -exec stat -c '{"t":"%F", "p": "%n", "s": %s, "m": "%Y", "e": "%A", "u": "%U", "g": "%G"}' {} \; | oafp in=ndjson ndjsonjoin=true path="[].{type:t,permissions:e,user:u,group:g,size:s,modifiedDate:to_date(mul(to_number(m),\`1000\`)),filepath:p}" from="sort(type,filepath)" out=ctable
- c: Unix
s: Files
d: Executes a recursive file list find command converting the result into a table.
e: |-
LSPATH=/openaf && find $LSPATH -exec stat -c '{"t":"%F", "p": "%n", "s": %s, "m": "%Y", "e": "%A", "u": "%U", "g": "%G"}' {} \; | oafp in=ndjson ndjsonjoin=true path="[].{type:t,permissions:e,user:u,group:g,size:s,modifiedDate:to_date(mul(to_number(m),\`1000\`)),filepath:p}" from="sort(type,filepath)" out=ctable
- c: Kubernetes
s: Kubectl
d: Given the list of all Kubernetes objects, produces a list of objects per namespace, kind, apiVersion, creation timestamp, name and owner.
e: |-
oafp cmd="kubectl get all -A -o json" path="items[].{ns:metadata.namespace,kind:kind,apiVersion:apiVersion,createDate:metadata.creationTimestamp,name:metadata.name,owner:metadata.join(',',map(&concat(kind, concat('/', name)), nvl(ownerReferences,from_json('[]'))))}" sql="select * order by ns, kind, name" out=ctable
- c: Azure
s: Bing
d: Given an Azure Bing Search API key and a query, returns the corresponding search result from Bing.
e: |-
QUERY="OpenAF" && KEY="12345" && curl -X GET "https://api.bing.microsoft.com/v7.0/search?q=$QUERY" -H "Ocp-Apim-Subscription-Key: $KEY" | oafp out=ctree
- c: OpenAF
s: Channels
d: Given a Prometheus database, queries a specific metric (go_memstats_alloc_bytes) during a defined period, with a 5-second step, and produces a static chart with the corresponding metric values.
e: |-
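# The first oafp queries the Prometheus range API through the channel; the second flattens the [timestamp, value] pairs and renders a static chart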
URL="http://localhost:9090" && METRIC="go_memstats_alloc_bytes" && TYPE="bytes" && LABELS="job=\"prometheus\"" && START="2024-06-18T20:00:00Z" && END="2024-06-18T20:15:00Z" && STEP=5 && echo "{query:'max($METRIC{$LABELS})',start:'$START',end:'$END',step:$STEP}" | oafp in=ch inch="(type:prometheus,options:(urlQuery:'$URL'))" inchall=true out=json | oafp path="[].set(@, 'main').map(&{metric:'$METRIC',job:get('main').metric.job,timestamp:to_date(mul([0],\`1000\`)),value:to_number([1])}, values) | []" out=schart schart="$TYPE '[].value':green:$METRIC -min:0"
- c: Generic
s: Base64
d: Encode/decode data (or text-like files) to/from gzip base64 representation for easier packing and transport.
e: |-
# encode a data file to a gzip base64 representation
oafp data.json out=gb64json > data.gb64
# decode a gzip base64 representation back into a data file
oafp data.gb64 in=gb64json out=json > data.json
- c: Generic
s: Avro
d: Reads an Avro data file as input
e: |-
# opack install avro
oafp data.avro libs=avro out=ctable
- c: Generic
s: Avro
d: Write an Avro data file as an output
e: |-
# opack install avro
oafp data.json libs=avro out=avro avrofile=data.avro
- c: Generic
s: Avro
d: "Given an Avro data file outputs it's corresponding statistics"
e: |-
# opack install avro
oafp libs=avro data.avro inavrostats=true
- c: Generic
s: Avro
d: "Given an Avro data file outputs the correspoding schema"
e: |-
# opack install avro
oafp libs=avro data.avro inavroschema=true
- c: Diff
s: Lines
d: Performing a diff between two long command lines to spot differences
e: |-
oafp diff="(a:before,b:after)" diffchars=true in=yaml color=true
# as stdin enter
before: URL="http://localhost:9090" && METRIC="go_memstats_alloc_bytes" && TYPE="bytes" && LABELS="job=\"prometheus\"" && START="2024-06-18T20:00:00Z" && END="2024-06-18T20:15:00Z" && STEP=5 && echo "{query:'max($METRIC{$LABELS})',start:'$START',end:'$END',step:$STEP}" | oafp in=ch inch="(type:prometheus,options:(urlQuery:'$URL'))" inchall=true out=json | oafp path="[].set(@, 'main').map(&{metric:'$METRIC',job:get('main').metric.job,timestamp:to_date(mul([0],\`1000\`)),value:to_number([1])}, values) | []" out=schart schart="$TYPE '[].value':green:$METRIC -min:0"
after: URL="http://localhost:9090" && METRIC="go_memstats_alloc_bytes" && TYPE="bytes" && LABELS="job=\"prometheus\"" && START="2024-06-18T20:00:00Z" && END="2024-06-18T20:15:00Z" && STEP=5 && echo "{query:'max($METRIC{$LABELS})',start:'$START',end:'$END',step:$STEP}" | oafp in=ch inch="(type:prometheus,options:(urlQuery:'$URL'))" inchall=true out=json | oafp path="[].set(@, 'main').map(&{metric:'$METRIC',job:get('main').metric.job,timestamp:to_date([0]),value:to_number([1])}, values) | []" out=schart schart="$TYPE '[].value':green:$METRIC -min:0"
#
# Ctrl^D
# as the result the difference between before and after will appear as red characters
- c: OpenAF
s: OpenVPN
d: "Using OpenAF code to perform a more complex parsing of the OpenVPN status data running on an OpenVPN container (nmaguiar/openvpn) called 'openvpn'"
e: |-
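# The embedded OpenAF function splits /tmp/openvpn-status.log into its CSV sections (client list, routing table, global stats) and merges the client and routing entries before formatting the table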
oafp in=oaf data='(function(){return(b=>{var a=b.split("\n"),c=a.indexOf("ROUTING TABLE"),d=a.indexOf("GLOBAL STATS"),f=a.indexOf("END");b=$csv().fromInString($path(a,`[2:${c}]`).join("\n")).toOutArray();var g=$csv().fromInString($path(a,`[${c+1}:${d}]`).join("\n")).toOutArray();a=$csv().fromInString($path(a,`[${d+1}:${f}]`).join("\n")).toOutArray();return{list:b.map(e=>merge(e,$from(g).equals("Common Name",e["Common Name"]).at(0))),stats:a}})($sh("docker exec openvpn cat /tmp/openvpn-status.log").get(0).stdout)})()' path="list[].{user:\"Common Name\",ip:split(\"Real Address\",':')[0],port:split(\"Real Address\",':')[1],vpnAddress:\"Virtual Address\",bytesRx:to_bytesAbbr(to_number(\"Bytes Received\")),bytesTx:to_bytesAbbr(to_number(\"Bytes Sent\")),connectedSince:to_datef(\"Connected Since\",'yyyy-MM-dd HH:mm:ss'),lastReference:to_datef(\"Last Ref\",'yyyy-MM-dd HH:mm:ss')}" sql="select * order by lastReference" out=ctable
- c: Network
s: Latency
d: "Given a host and a port will display a continuously updating line chart with network latency, in ms, between the current device and the target host and port"
e: |-
HOST=1.1.1.1 && PORT=53 && oafp in=oaf data="ow.loadNet().testPortLatency('$HOST',$PORT)" out=chart chart="int @:red:latencyMS -min:0" loop=1 loopcls=true
- c: Docker
s: Registry
d: List all the docker container image repositories of a private registry.
e: |-
# opack install DockerRegistry
# check more options with 'oafp libs=dockerregistry help=dockerregistry'
oafp libs=dockerregistry data="()" in=registryrepos inregistryurl=http://localhost:5000
- c: Docker
s: Registry
d: List all the docker container image repository tags of a private registry.
e: |-
# opack install DockerRegistry
# check more options with 'oafp libs=dockerregistry help=dockerregistry'
oafp libs=dockerregistry data="library/nginx" in=registrytags inregistryurl=http://localhost:5000
- c: Docker
s: Registry
d: List a table of docker container image repositories and corresponding tags of a private registry.
e: |-
# opack install DockerRegistry
# check more options with 'oafp libs=dockerregistry help=dockerregistry'
oafp libs=dockerregistry in=registryrepos data="()" inregistryurl=http://localhost:5000 inregistrytags=true out=ctable
- c: Kubernetes
s: PVC
d: Produces a table with all Kubernetes persistent volume claims (PVCs) in use by pods.
e: |-
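# requires kubectl access to the cluster; note the 'pod' column is taken from the first container name in each pod spec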
oafp cmd="kubectl get pods -A -o json" path="items[].spec.set(@,'m').volumes[?persistentVolumeClaim].insert(@,'pname',get('m').containers[0].name).insert(@,'node',get('m').nodeName) | [].{pod:pname,node:node,pvc:persistentVolumeClaim.claimName}" from="sort(node,pod)" out=ctable
- c: Generic
s: List files
d: List all files and folders recursively and produce a count table by file extension.
e: |-
FPATH="git/ojob.io" && oafp in=ls lsrecursive=true data="$FPATH" path="[].insert(@,'extension',if(isFile && index_of(filename,'.')>'0',replace(filename,'^.+\.([^\.]+)$','','\$1'),'<dir>'))" from="countBy(extension)" out=json | oafp from="sort(-_count)" out=ctable
- c: Mac
s: Tunnelblick
d: On a Mac OS with Tunnelblick, copy all your OpenVPN configurations into .ovpn files.
e: |-
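# assumes an 'output' folder already exists in the current directory to receive the .ovpn copies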
oafp in=ls data="$HOME/Library/Application Support/Tunnelblick/Configurations" path="[?filename=='config.ovpn'].insert(@,'name',replace(filepath,'.+\/([^\/]+)\.tblk\/.+','','\$1'))" lsrecursive=true out=cmd outcmdtmpl=true outcmd="cp \"{{filepath}}\" output/\"{{name}}.ovpn\""
- c: Network
s: ASN
d: Retrieve the list of ASN numbers and names from RIPE and transform it to a CSV.
e: |-
oafp url="https://ftp.ripe.net/ripe/asnames/asn.txt" in=lines linesjoin=true path="[?length(@)>'0'].split(@,' ').{asn:[0],name:join(' ',[1:])}" out=csv
- c: Network
s: ASN
d: Retrieve an IP-to-ASN list and convert it to ndjson
e: |-
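# the ip2asn TSV columns are expected to be: range start, range end, AS number, country code and AS description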
oafp cmd="curl https://api.iptoasn.com/data/ip2asn-combined.tsv.gz | gunzip" in=lines linesjoin=true path="[?length(@)>'0'].split(@,'\t').{start:[0],end:[1],asn:[2],area:[3],name:[4]}" out=ndjson
- c: GPU
s: Nvidia
d: Get the current Nvidia per-GPU usage
e: |-
nvidia-smi --query-gpu=index,name,memory.total,memory.used,memory.free,utilization.gpu --format=csv,nounits | oafp in=lines path="map(&trim(@),split(@,',')).join(',',@)" | oafp in=csv out=ctable
- c: GPU
s: Nvidia
d: Builds a grid with two charts visualizing the usage of an Nvidia GPU and its corresponding memory for a specific GPU_IDX (GPU index)
e: |-
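# requires nvidia-smi on the PATH; GPU_IDX selects which GPU row feeds the two charts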
GPU_IDX=0 && oafp cmd="nvidia-smi --query-gpu=index,name,memory.total,memory.used,memory.free,utilization.gpu --format=csv,nounits | oafp in=lines path=\"map(&trim(@),split(@,',')).join(',',@)\"" in=csv path="[$GPU_IDX].{memTotal:mul(to_number(\"memory.total [MiB]\"),\`1024\`),memUsed:mul(to_number(\"memory.used [MiB]\"),\`1024\`),memFree:mul(to_number(\"memory.free [MiB]\"),\`1024\`),gpuUse:to_number(\"utilization.gpu [%]\")}" out=grid grid="[[(title: usage, type: chart, obj: 'int gpuUse:green:usage -min:0 -max:100')]|[(title: memory, type: chart, obj: 'bytes memTotal:red:total memUsed:cyan:used -min:0')]]" loopcls=true loop=1
- c: DB
s: Mongo
d: List all records from a specific database and collection of a remote MongoDB server.
e: |-
# opack install mongo
oafp libs="@Mongo/mongo.js" in=ch inch="(type: mongo, options: (database: default, collection: collection, url: 'mongodb://a.server:27017'))" inchall=true path="[].delete(@,'_id')" data="()"
- c: Channels
s: S3
d: "Given a S3 bucket will save a list of data (the current list of name and versions of OpenAF's oPacks) within a provided prefix"
e: |-
# opack install s3
oafp libs="@S3/s3.js" -v path="openaf.opacks" out=ch ch="(type: s3, options: (s3url: 'https://play.min.io:9000', s3accessKey: abc123, s3secretKey: 'xyz123', s3bucket: test, s3prefix: data, multifile: true, gzip: true))" chkey=name
- c: Channels
s: S3
d: "Given a S3 bucket will load data from a previously store data in a provided prefix"
e: |-
# opack install s3
oafp libs="@S3/s3.js" in=ch inch="(type: s3, options: (s3url: 'https://play.min.io:9000', s3accessKey: abc123, s3secretKey: 'xyz123', s3bucket: test, s3prefix: data, multifile: true, gzip: true))" data="()" inchall=true out=ctable
- c: XML
s: Maven
d: Given a Maven pom.xml, parses the XML content into a colored table ordered by the fields groupId and artifactId.
e: |-
oafp pom.xml path="project.dependencies.dependency" out=ctable sql="select * order by groupId, artifactId"
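# a hedged variant (assumes the dependency elements are flat, e.g. groupId/artifactId/version): export the same list to a CSV file
oafp pom.xml path="project.dependencies.dependency" out=csv > dependencies.csv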
- c: Markdown
s: Tables
d: For an input markdown file, parse all tables, transform them to JSON and output the first one as a colored table
e: |-
oafp url="https://raw.githubusercontent.com/OpenAF/sh/refs/heads/main/README.md" in=mdtable inmdtablejoin=true path="[0]" out=ctable
- c: AWS
s: DynamoDB
d: Given an AWS DynamoDB table 'my-table' will produce an ndjson output with all table items.
e: |-
# opack install AWS
oafp libs="@AWS/aws.js" in=ch inch="(type: dynamo, options: (region: us-west-1, tableName: my-table))" inchall=true data="__" out=ndjson
- c: AWS
s: DynamoDB
d: Given an AWS DynamoDB table 'my-users' will produce a colored tree output by getting the item for the email key 'scott.tiger@example.com'
e: |-
# opack install AWS
oafp in=ch inch="(type: dynamo, options: (region: eu-west-1, tableName: my-users))" data="(email: scott-tiger@example.com)" libs="@AWS/aws.js" out=ctree
- c: Generic
s: Set
d: Given two json files, with arrays of component versions, generate a table with the differences found on one of the sides (setop=diffb).
e: |-
oafp data="[(file: versions-latest.json)|(file: versions-build.json)]" in=oafp set="(a:'[0]',b:'[1]')" setop=diffb out=ctable
- c: Generic
s: Set
d: Given two json files, with arrays of component versions, generate a table with the union of the two sides.
e: |-
oafp data="[(file: versions-latest.json)|(file: versions-build.json)]" in=oafp set="(a:'[0]',b:'[1]')" setop=union out=ctable
- c: Mac
s: Chart
d: On Mac OS, produce a looping chart with the total percentage of current CPU usage.
e: |-
oafp cmd="top -l 1 | grep 'CPU usage' | awk '{print \$3 + \$5}'" out=chart chart="int @:blue:CPU_Usage -min:0 -max:100" loop=2 loopcls=true
- c: AI
s: Scaleway
d: Set up a Scaleway Generative API LLM connection and ask for a list of all Roman emperors.
e: |-
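# the url, key and model values below are placeholders; replace them with your own Scaleway endpoint, key and model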
export OAFP_MODEL="(type: openai, url: 'https://api.scaleway.ai', key: '111-222-333', model: 'llama-3.1-70b-instruct', headers: (Content-Type: application/json))"
oafp data="list all roman emperors" in=llm getlist=true out=ctable
- c: Mac
s: Info
d: Get a list of the currently logged-in users in Mac OS
e: |-
oafp cmd="who -aH" in=lines linesvisual=true linesjoin=true out=ctable path="[0:-1]"
- c: GitHub
s: GIST
d: Using GitHub's GIST functionality, retrieves and parses an oafp examples YAML file with the template and the corresponding data.
e: |-
opack install GIST
oafp libs="@GIST/gist.js" in=ch inch="(type: gist)" data="(id: '557e12e4a840d513635b3a57cb57b722', file: oafp-examples.yaml)" path=content | oafp in=yaml out=template templatedata=data templatepath=tmpl
- c: GIT
s: History
d: Given a GIT repository, retrieves the current log history and parses it into an Excel (XLSX) file.
e: |-
opack install plugin-XLS
git log --pretty=format:'{c:"%H",a:"%an",d:"%ad",m:"%s"}' | oafp in=ndjson ndjsonjoin=true path="[].{commit:c,author:a,date:to_date(d),message:m}" out=xls outfile=data.xlsx xlsopen=false
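# caveat: commit messages containing double quotes or backslashes can break the improvised JSON above;
# a hedged alternative to preview the same data as a colored table before exporting
git log --pretty=format:'{c:"%H",a:"%an",d:"%ad",m:"%s"}' | oafp in=ndjson ndjsonjoin=true path="[].{commit:c,author:a,date:to_date(d),message:m}" out=ctable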
- c: Generic
s: RSS
d: Generates an HTML page with the current news from Google News, ordered by date, and opens a browser with it.
e: |-