Small fixes (no. 2) #1538

Merged (5 commits) on May 31, 2024
2 changes: 1 addition & 1 deletion ChangeLog
@@ -845,7 +845,7 @@ Version 4.6.5 (3 November 2009)
* Fix: Take more care in distinguishing mass and count nouns.
* Fix: Old bug w/relative clauses: Rw+ is optional, not mandatory.
* Provide tags identifying relative, superlative adjectives.
* Remove BioLG NUMBER-AND-UNIT handling, its been superseded.
* Remove BioLG NUMBER-AND-UNIT handling, it's been superseded.
* Fix handling of parenthetical phrases/clauses.
* Fix: handling of capitalized first words ending in letter "s".
* Fix: support "filler-it" SF link for "It was reasoned that..."
7 changes: 4 additions & 3 deletions autogen.sh
@@ -8,8 +8,6 @@ if [ ! -f "autogen.sh" ]; then
exit 1
fi

rm -f autogen.err

run_configure=true
while [ -n "$1" ]
do
@@ -28,7 +26,8 @@ do
shift
;;
*)
break 2
echo "$0: Error: Unknown flag \"$1\"."
exit 1
esac
done

@@ -40,6 +39,8 @@ if [ $? -ne 0 ]; then
exit 1
fi

rm -f autogen.err

# If there's a config.cache file, we may need to delete it.
# If we have an existing configure script, save a copy for comparison.
# (Based on Subversion's autogen.sh, see also the deletion code below.)
2 changes: 1 addition & 1 deletion bindings/python-examples/tests.py
@@ -490,7 +490,7 @@ def test_21_set_error_handler_None(self):
self.numerr = LG_Error.printall(self.error_handler_test, None)
self.assertEqual(self.numerr, self.numerr)

def test_22_defaut_handler_param(self):
def test_22_default_handler_param(self):
"""Test bad data parameter to default error handler"""
# (It should be an integer >=0 and <= lg_None.)
# Here the error handler is still set to None.
6 changes: 3 additions & 3 deletions configure.ac
@@ -156,7 +156,7 @@ AC_SUBST(HOST_OS)

# ====================================================================
# FreeBSD work-around. Apparently, the AX_PTHREAD autoconf macro
# fails to include -lstdthreads in it's results. See bug report
# fails to include -lstdthreads in its results. See bug report
# https://github.com/opencog/link-grammar/issues/1355
# So we hack, and explicitly set it here.

@@ -495,7 +495,7 @@ AC_SUBST(SQLITE3_CFLAGS)
# The AtomSpace dictionary backend
# TODO - use pkg_config for find the libs.
# TODO - do NOT specify -lpersist-rocks -lpersist-cog here,
# they need to be dynmically loaded instead (for proper
# they need to be dynamically loaded instead (for proper
# initialization of the shared library ctors these contain).
# Alternately, if they are found, then wrap them with
# "-Wl,--no-as-needed"
@@ -930,7 +930,7 @@ fi

# ===================================================================
# swig is needed for compiling the Perl and Python bindings ...
# ... well, actually, no, its not. 'make dist' is currently set up to
# ... well, actually, no, it's not. 'make dist' is currently set up to
# package all of the files generated by swig, so the user does not need
# to actually install it. However, swig is needed to create the package,
# and also needed to build from a GitHub pull.
2 changes: 1 addition & 1 deletion link-grammar/README.md
@@ -31,7 +31,7 @@ Listed in rough processing order.
Version 5.3.14 - Improved error notification facility
=====================================================

This code is still "experimental", so it's API may be changed.
This code is still "experimental", so its API may be changed.

It is intended to be mostly compatible. It supports multi-threading -
all of its operations are local per-thread.
2 changes: 1 addition & 1 deletion link-grammar/api-structures.h
@@ -24,7 +24,7 @@
*
* To make the API simpler, each of these is typedef'ed as a pointer
* to a data structure. If you're not used to this, some of the code
* may look strange, since its not plain that these types are pointers.
* may look strange, since it's not plain that these types are pointers.
*
*********************************************************************/

4 changes: 2 additions & 2 deletions link-grammar/dict-atomese/README.md
@@ -151,7 +151,7 @@ same way, differing only in how the AtomSpace is managed.

Private Mode
------------
In **private mode**, the system creates and maintains it's own private
In **private mode**, the system creates and maintains its own private
AtomSpace, and relies on being able to access a `StorageNode` from
which appropriate language data can be fetched. This `StorageNode`
must be configured in the `storage.dict` file. This `StorageNode` can
@@ -202,7 +202,7 @@ Both modes work as follows:
will be mapped, one-to-one, to LG disjuncts, and used for parsing.
If parsing fails, and supplemental word-pairs is enabled, then
disjuncts will be decorated with additional word-pairs, hoping to
obtain a parse. If word-pairs are not avilable, then disjuncts can
obtain a parse. If word-pairs are not available, then disjuncts can
be supplemented with random `ANY` link types (which connect to any
other `ANY` connector, i.e. randomly.)
* If there are no `Sections`, or if these are disabled, then word-pairs
2 changes: 1 addition & 1 deletion link-grammar/dict-atomese/lookup-atomese.cc
@@ -644,7 +644,7 @@ void and_enchain_right(Pool_desc* pool, Exp* &andhead, Exp* &andtail, Exp* item)
static void report_dict_usage(Dictionary dict)
{
// We could also print pool_num_elements_issued() but this is not
// interesting; its slightly less than pool_size().
// interesting; it's slightly less than pool_size().
logger().info("LG Dict: %lu entries; %lu Exp_pool elts; %lu MiBytes",
dict->num_entries,
pool_size(dict->Exp_pool),
2 changes: 1 addition & 1 deletion link-grammar/dict-atomese/sections.cc
@@ -351,7 +351,7 @@ Dict_node * lookup_section(Dictionary dict, const Handle& germ)
Exp* andtail = nullptr;

// We really expect dn to not be null here, but ... perhaps it
// is, if its just not been observed before.
// is, if it's just not been observed before.
Exp* eee = nullptr;
if (dn)
{
6 changes: 3 additions & 3 deletions link-grammar/dict-atomese/word-pairs.cc
@@ -260,7 +260,7 @@ static Exp* make_pair_exprs(Dictionary dict, const Handle& germ)
// Get the cached link-name for this pair.
const std::string& slnk = cached_linkname(local, rawpr);

// Direction is easy to determine: its either left or right.
// Direction is easy to determine: it's either left or right.
char cdir = '+';
if (rawpr->getOutgoingAtom(1) == germ) cdir = '-';

@@ -446,7 +446,7 @@ static Exp* make_any_conns(Dictionary dict, Pool_desc* pool)
/// and arity is 3, then this will return `(A+ or B- or C+ or ())
/// and (A+ or B- or C+ or ()) and (A+ or B- or C+ or ())`. When
/// this is exploded into disjuncts, any combination is possible,
/// from size zero to three. That's why its a Cartesian product.
/// from size zero to three. That's why it's a Cartesian product.
///
/// FYI, this is a work-around for the lack of a commmutative
/// multi-product. What we really want to do here is to have the
@@ -497,7 +497,7 @@ Exp* make_cart_pairs(Dictionary dict, const Handle& germ,

Exp* optex = make_optional_node(pool, epr);

// If its 1-dimensional, we are done.
// If it's 1-dimensional, we are done.
if (1 == arity) return optex;

Exp* andhead = nullptr;
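The comment above `make_cart_pairs` describes how `(A+ or B- or C+ or ())` repeated over the arity explodes into every combination of connectors. As a quick illustration of why that is a Cartesian product, here is a small stand-alone sketch in plain C (invented names, not the library's code), enumerating the combinations for arity 3:

```c
#include <stdio.h>

/* Stand-alone illustration only: explode "(A+ or B- or C+ or ()) and
 * (A+ or B- or C+ or ()) and (A+ or B- or C+ or ())" into all of its
 * combinations. The empty choice "()" stands for an absent connector. */
int main(void)
{
    const char *choice[] = { "A+", "B-", "C+", "()" };
    const int nchoices = 4;

    for (int i = 0; i < nchoices; i++)
        for (int j = 0; j < nchoices; j++)
            for (int k = 0; k < nchoices; k++)
                printf("%s & %s & %s\n", choice[i], choice[j], choice[k]);

    /* 4 * 4 * 4 = 64 combinations, covering disjuncts of size zero
     * (all three choices empty) up to size three. */
    return 0;
}
```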
2 changes: 1 addition & 1 deletion link-grammar/dict-common/dict-common.h
@@ -151,7 +151,7 @@ struct Dictionary_s
/* Duplicate words are disallowed in 4.0.dict unless
* allow_duplicate_words is defined to "true".
* Duplicate idioms are allowed, unless the "test" parse option
* is set to "disalow-dup-idioms" (listing them for debug).
* is set to "disallow-dup-idioms" (listing them for debug).
* If these variables are 0, they get their allow/disallow values
* when the first duplicate word/idiom is encountered.
* 0: not set; 1: allow; -1: disallow */
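The comment above documents a lazy tri-state convention (0: not set; 1: allow; -1: disallow) that is only resolved when the first duplicate word or idiom is seen. A minimal sketch of that convention, with an invented helper name rather than code from dict-common:

```c
#include <stdbool.h>

/* Hypothetical helper illustrating the tri-state convention:
 * 0 = not yet decided, 1 = allow, -1 = disallow. The decision is
 * made lazily, on the first duplicate encountered. */
static bool resolve_duplicate_policy(signed char *flag, bool allow_by_default)
{
    if (*flag == 0)
        *flag = allow_by_default ? 1 : -1;
    return (*flag == 1);
}
```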
2 changes: 1 addition & 1 deletion link-grammar/dict-common/dict-utils.c
@@ -36,7 +36,7 @@ void patch_subscript(char * s)
if (*de == '\0') return;
dp = (int) *de;

/* If it's followed by a UTF8 char, its NOT a subscript */
/* If it's followed by a UTF8 char, it's NOT a subscript */
if (127 < dp || dp < 0) return;
/* assert ((0 < dp) && (dp <= 127), "Bad dictionary entry!"); */
if (isdigit(dp)) return;
2 changes: 1 addition & 1 deletion link-grammar/dict-common/idiom.c
@@ -223,7 +223,7 @@ void insert_idiom(Dictionary dict, Dict_node * dn)

/* ---- end of the code alluded to above ---- */

/* now its time to insert them into the dictionary */
/* now it's time to insert them into the dictionary */

dn_list = start_dn_list;

4 changes: 2 additions & 2 deletions link-grammar/dict-ram/dict-ram.c
@@ -69,7 +69,7 @@ void free_dictionary_root(Dictionary dict)
/**
* dict_order_strict - order two dictionary words in proper sort order.
* Return zero if the strings match, else return in a unique order.
* The order is NOT (locale-dependent) UTF8 sort order; its ordered
* The order is NOT (locale-dependent) UTF8 sort order; it's ordered
* based on numeric values of single bytes. This will uniquely order
* UTF8 strings, just not in a LANG-dependent (locale-dependent) order.
* But we don't need/want locale-dependent ordering!
@@ -309,7 +309,7 @@ Dict_node * dict_node_wild_lookup(Dictionary dict, const char *s)
* The in-RAM representation is NOT a binary tree; instead it is a tree
* of lists. An Exp node contains a tag: `AND_type`, `OR_type`, etc.
* The `operand_next` field is used to hold a linked list which is
* joined up with the given Exp type. This is more efficent to traverse,
* joined up with the given Exp type. This is more efficient to traverse,
* and also saves space, as compared to an ordinary binary tree.
*
* Example: (A or B or C or D) becomes
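The `dict_order_strict` comment above says words are ordered by the numeric values of single bytes, not by locale-dependent UTF-8 collation. A rough sketch of what such a byte-wise comparison looks like (an illustration under that reading, not the function's actual body):

```c
/* Compare two strings by raw unsigned byte values. This gives every
 * pair of UTF-8 strings a unique, locale-independent order. */
static int byte_order(const char *s, const char *t)
{
    while (*s != '\0' && (unsigned char)*s == (unsigned char)*t)
        { s++; t++; }
    return (int)(unsigned char)*s - (int)(unsigned char)*t;
}
```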
2 changes: 1 addition & 1 deletion link-grammar/dict-sql/read-sql.c
@@ -66,7 +66,7 @@ static const char * make_expression(Dictionary dict,
while (*p && (lg_isspace((unsigned char)*p))) p++;
if (0 == *p) return p;

/* If it's an open paren, assume its the beginning of a new list */
/* If it's an open paren, assume it's the beginning of a new list */
if ('(' == *p)
{
p = make_expression(dict, ++p, pex);
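The `make_expression` fragment above skips whitespace and recurses when it sees an open paren. A stripped-down sketch of that recursive shape (hypothetical helper that only walks the text, builds no expressions, and is not the SQL-dict code):

```c
/* Skip blanks; on '(' recurse over the elements until the matching ')'.
 * Returns the position just past whatever was consumed. */
static const char *parse_list(const char *p)
{
    while (*p == ' ' || *p == '\t') p++;
    if (*p == '\0') return p;

    if (*p == '(')
    {
        p++;
        while (*p != '\0' && *p != ')')
            p = parse_list(p);          /* each element may itself be a list */
        if (*p == ')') p++;
        return p;
    }

    /* Otherwise consume a single token. */
    while (*p != '\0' && *p != ' ' && *p != '\t' && *p != ')') p++;
    return p;
}
```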
4 changes: 2 additions & 2 deletions link-grammar/parse/count.c
@@ -66,7 +66,7 @@ struct Table_tracon_s
* Each element of these arrays points to a vector, called here word-vector,
* with a size equal to abs(nearest_word - farthest_word) + 1.
*
* Each element of this vector predicts, in it's status field, whether the
* Each element of this vector predicts, in its status field, whether the
* expected count is zero for a given null-count (when the word it
* represents is the end word of the range).
* This prediction is valid for null-counts up to null_count (or for any
@@ -164,7 +164,7 @@ static size_t estimate_tracon_entries(Sentence sent)
}

#if HAVE_THREADS_H && !__EMSCRIPTEN__
/* Each thread will get it's own version of the `kept_table`.
/* Each thread will get its own version of the `kept_table`.
* If the program creates zillions of threads, then there will
* be a mem-leak if this table is not released when each thread
* exits. This code arranges so that `free_tls_table` is called
2 changes: 1 addition & 1 deletion link-grammar/parse/extract-links.c
@@ -851,7 +851,7 @@ static void issue_links_for_choice(Linkage lkg, Parse_choice *pc,
*
* How it works:
*
* Each linkage has the abstact form of a binary tree, with left and
* Each linkage has the abstract form of a binary tree, with left and
* right subtrees. The Parse_set is an encoding for all possible
* trees. Selecting a linkage is then a matter of selecting tree from
* out of the parse-set.
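The comment above explains that the Parse_set encodes all possible binary linkage trees at once, so extracting a linkage means choosing one tree out of that set. A conceptual sketch of such an encoding (struct names invented for illustration; the real Parse_set/Parse_choice layout differs):

```c
/* Each set node lists every way its span can be decomposed; each choice
 * points at the set of possible left subtrees and the set of possible
 * right subtrees. Picking exactly one choice at every node selects one
 * concrete linkage tree. */
struct parse_choice_sketch;

struct parse_set_sketch
{
    struct parse_choice_sketch *choices;   /* alternative decompositions */
};

struct parse_choice_sketch
{
    struct parse_set_sketch *left;         /* all possible left subtrees */
    struct parse_set_sketch *right;        /* all possible right subtrees */
    struct parse_choice_sketch *next;      /* next alternative */
};
```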
2 changes: 1 addition & 1 deletion link-grammar/parse/fast-match.h
@@ -20,7 +20,7 @@
#include "link-includes.h" // for Sentence
#include "memory-pool.h"

typedef struct match_list_cache_sruct
typedef struct
{
Disjunct *d; /* disjuncts with a jet linkage */
Count_bin count; /* the counts for that linkage */
4 changes: 2 additions & 2 deletions link-grammar/parse/parse.c
@@ -354,7 +354,7 @@ static int linkage_equiv_p(Linkage lpv, Linkage lnx)
// names were the same. The connector types might still differ,
// due to intersection. The multi-connector flag might differ.
// However, neither of these are likely. It is plausible to skip
// this check entirely, its mostly a CPU-time-waster that will
// this check entirely, it's mostly a CPU-time-waster that will
// never find any differences for the almost any situation.
for (uint32_t li=0; li<lpv->num_links; li++)
{
@@ -460,7 +460,7 @@ static void deduplicate_linkages(Sentence sent, int linkage_limit)
!sent->overflowed && (sent->num_linkages_found <= linkage_limit)))
return;

// Deduplicate the valid linkages only; its not worth wasting
// Deduplicate the valid linkages only; it's not worth wasting
// CPU time on the rest. Sorting guarantees that the valid
// linkages come first.
uint32_t nl = sent->num_valid_linkages;
2 changes: 1 addition & 1 deletion link-grammar/parse/preparation.c
@@ -121,7 +121,7 @@ static void build_sentence_disjuncts(Sentence sent, float cost_cutoff,
#ifdef DEBUG
unsigned int dcnt, ccnt;
count_disjuncts_and_connectors(sent, &dcnt, &ccnt);
lgdebug(+D_PREP, "%u disjucts, %u connectors (%zu allocated)\n",
lgdebug(+D_PREP, "%u disjuncts, %u connectors (%zu allocated)\n",
dcnt, ccnt,
pool_num_elements_issued(sent->Connector_pool) - num_con_alloced);
#endif
6 changes: 3 additions & 3 deletions link-grammar/parse/prune.c
@@ -44,7 +44,7 @@
#define PRx(x) fprintf(stderr, ""#x)
#define PR(...) true

/* Indicator that this connector cannot be used -- that its "obsolete". */
/* Indicator that this connector cannot be used -- that it's "obsolete". */
#define BAD_WORD (MAX_SENTENCE+1)

typedef uint8_t WordIdx_m; /* Storage representation of word index */
@@ -1321,9 +1321,9 @@ static unsigned int cms_hash(const char *s)
return (i & (CMS_SIZE-1));
}

static void reset_last_criterion(multiset_table *cmt, const char *ctiterion)
static void reset_last_criterion(multiset_table *cmt, const char *criterion)
{
unsigned int h = cms_hash(ctiterion);
unsigned int h = cms_hash(criterion);

for (Cms *cms = cmt->cms_table[h]; cms != NULL; cms = cms->next)
cms->last_criterion = false;
2 changes: 1 addition & 1 deletion link-grammar/prepare/exprune.c
@@ -185,7 +185,7 @@ static inline bool matches_S(connector_table **ct, int w, condesc_t * c)
* doesn't match anything in the set S.
*
* If an OR or AND type expression node has one child, we can replace it
* by it's child. This, of course, is not really necessary, except for
* by its child. This, of course, is not really necessary, except for
* performance.
*/

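The exprune comment above notes that an OR or AND node with a single child can be replaced by that child. A minimal sketch of that collapse on a generic expression node (hypothetical type, not the library's Exp):

```c
#include <stddef.h>

/* Generic n-ary expression node: operands are chained through `next`. */
struct enode
{
    int op;                 /* AND / OR tag; its value is irrelevant here */
    struct enode *child;    /* first operand */
    struct enode *next;     /* next sibling operand */
};

/* If an AND/OR node has exactly one operand, the wrapper adds nothing,
 * so hand back the operand itself. Purely a performance shortcut. */
static struct enode *collapse_single_child(struct enode *n)
{
    if (n->child != NULL && n->child->next == NULL)
        return n->child;
    return n;
}
```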
12 changes: 6 additions & 6 deletions link-grammar/tokenize/tokenize.c
@@ -1444,7 +1444,7 @@ static bool mprefix_split(Sentence sent, Gword *unsplit_word, const char *word)
mprefix_list = AFCLASS(dict->affix_table, AFDICT_MPRE);
mp_strippable = mprefix_list->length;
if (0 == mp_strippable) return false;
/* The mprefix list is revered-sorted according to prefix length.
/* The mprefix list is reversed-sorted according to prefix length.
* The code here depends on that. */
mprefix = mprefix_list->string;

@@ -1539,11 +1539,11 @@ static bool mprefix_split(Sentence sent, Gword *unsplit_word, const char *word)
}

/* Return true if the word might be capitalized by convention:
* -- if its the first word of a sentence
* -- if its the first word following a colon, a period, a question mark,
* -- if it's the first word of a sentence
* -- if it's the first word following a colon, a period, a question mark,
* or any bullet (For example: VII. Ancient Rome)
* -- if its the first word following an ellipsis
* -- if its the first word of a quote
* -- if it's the first word following an ellipsis
* -- if it's the first word of a quote
*
* XXX FIXME: These rules are rather English-centric. Someone should
* do something about this someday.
@@ -2270,7 +2270,7 @@ static void issue_r_stripped(Sentence sent,
if (NULL != r_stripped[1][i])
{
/* We are going to issue a subscripted word which is not a
* substring of it's unsplit_word. For now, the token position
* substring of its unsplit_word. For now, the token position
* computation code needs an indication for that. */
replabel = strdupa(label);
replabel[0] = REPLACEMENT_MARK[0];
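The tokenize.c comment above lists the conventions under which a word may legitimately be capitalized (sentence start, after a colon, period, question mark, bullet, ellipsis, or opening quote). A rough approximation of those rules as stand-alone code (invented helper working on plain strings; the real check operates on Gword tokens):

```c
#include <stdbool.h>
#include <string.h>

/* prev is the previous token, or NULL at the start of the sentence. */
static bool maybe_capitalized_by_convention(const char *prev)
{
    if (prev == NULL || prev[0] == '\0')
        return true;                 /* first word of the sentence */

    char last = prev[strlen(prev) - 1];
    if (last == ':' || last == '.' || last == '?')
        return true;                 /* after :, ., ?, a bullet like "VII.",
                                        or an ellipsis "..." */

    if ((prev[0] == '"' || prev[0] == '\'') && prev[1] == '\0')
        return true;                 /* first word of a quote */

    return false;
}
```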
2 changes: 1 addition & 1 deletion link-grammar/tracon-set.c
@@ -64,7 +64,7 @@ static tid_hash_t hash_connectors(const Connector *c, unsigned int shallow)
* cache/swap trashing if the table temporary grows very big. However, it
* had a bug, and it is not clear when to shrink the table - shrinking it
* unnecessarily can cause an overhead of a table growth. Keep for
* possible reimlementation of a similar idea.
* possible reimplementation of a similar idea.
*/
static unsigned int find_prime_for(size_t count)
{
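The tracon-set comment above concerns growing (and possibly shrinking) the hash table to prime sizes. One plausible reading of `find_prime_for` is "smallest prime not below count"; a naive stand-alone version of that idea follows (an assumption about intent, not the actual implementation):

```c
#include <stdbool.h>
#include <stddef.h>

/* Trial-division primality test; fine at hash-table scales. */
static bool is_prime(size_t n)
{
    if (n < 2) return false;
    for (size_t d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

/* Return the smallest prime >= count, suitable as a table size. */
static size_t next_prime_at_least(size_t count)
{
    while (!is_prime(count)) count++;
    return count;
}
```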
4 changes: 2 additions & 2 deletions link-parser/lg_readline.c
@@ -49,7 +49,7 @@ static wchar_t * prompt(EditLine *el)
return wc_prompt;
}

// lg_readline()is called via a chain of functions:
// lg_readline() is called via a chain of functions:
// fget_input_string -> get_line -> get_terminal_line -> lg_readline.
// To avoid changing all of them, this variable is static for now.
// FIXME: Move the call of find_history_filepath() to lg_readline(), and
@@ -82,7 +82,7 @@ void find_history_filepath(const char *dictname, const char *argv0,
{
prt_error("Warning: xdg_get_home(XDG_BD_STATE) failed; "
"input history will not be supported.\n");
history_file = strdup("dev/null");
history_file = strdup("/dev/null");
}

if (get_verbosity() == D_USER_FILES)