FORMAT: 1A
HOST: http://openocr.yourserver.com
# openocr
OpenOCR REST API - See http://www.openocr.net
# Group OpenOCR
If you are writing in Go, it's easier to just use the [open-ocr-client](https://github.com/tleyden/open-ocr-client) library, which uses this API.
## OCR Decode via URL [/ocr]
### OCR Decode [POST]
Decode the image at the URL given in the `img_url` parameter and return the decoded OCR text.
+ Request (application/json)

    + Body

            {
                "img_url": "http://bit.ly/ocrimage",
                "engine": "tesseract",
                "engine_args": {
                    "config_vars": {
                        "tessedit_char_whitelist": "0123456789"
                    },
                    "psm": "3"
                }
            }
    + Schema

            {
                "$schema": "http://json-schema.org/draft-04/schema#",
                "title": "OpenOCR OCR Processing Request",
                "description": "A request to convert an image into text",
                "type": "object",
                "properties": {
                    "img_url": {
                        "description": "The URL of the image to process.",
                        "type": "string"
                    },
                    "engine": {
                        "description": "The OCR engine to use",
                        "enum": ["tesseract", "go_tesseract", "mock"]
                    },
                    "engine_args": {
                        "type": "object",
                        "description": "The OCR engine arguments to pass (engine-specific)",
                        "properties": {
                            "config_vars": {
                                "type": "object",
                                "description": "Config vars - equivalent of -c args to tesseract"
                            },
                            "psm": {
                                "description": "Page Segmentation Mode, equivalent of the -psm arg to tesseract. To use the default, omit this field from the JSON.",
                                "enum": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
                            },
                            "lang": {
                                "description": "The language to use. If omitted, will use English",
                                "enum": [
                                    "eng", "ara", "bel", "ben", "bul", "ces", "dan", "deu",
                                    "ell", "fin", "fra", "heb", "hin", "ind", "isl", "ita",
                                    "jpn", "kor", "nld", "nor", "pol", "por", "ron", "rus",
                                    "spa", "swe", "tha", "tur", "ukr", "vie", "chi-sim", "chi-tra"
                                ]
                            }
                        },
                        "additionalProperties": false
                    },
                    "inplace_decode": {
                        "type": "boolean",
                        "description": "If true, will attempt to do the OCR decode in-place rather than queuing a message on RabbitMQ for worker processing. Useful for local testing, not recommended for production."
                    }
                },
                "required": ["img_url", "engine"],
                "additionalProperties": false
            }
+ Response 200 (text/plain)

        Decoded OCR Text
## OCR Decode via file upload [/ocr-file-upload]
### OCR Decode [POST]
Decode the image data given directly in the multipart/related payload. The order of the parts matters: the first part must be the JSON request, which tells the server which OCR engine to use. The schema for this JSON is documented in the `/ocr` endpoint; the `img_url` field of the JSON is ignored for this endpoint.

The image attachment must be the _second_ part, and any image content type should work (e.g., image/png, image/jpeg, etc.).
+ Request (multipart/related; boundary=---BOUNDARY)

        -----BOUNDARY
        Content-Type: application/json

        {"engine":"tesseract"}
        -----BOUNDARY
        Content-Disposition: attachment; filename="attachment.txt"
        Content-Type: image/png

        PNGDATA.........
        -----BOUNDARY--
+ Response 200 (text/plain)

        Decoded OCR Text