{ "':- chr_constraint'/1": { "body":":- chr_constraint(${1:Specifier})$2\n$0", "description":":- chr_constraint(+Specifier).\nEvery constraint used in CHR rules has to be declared with a chr_constraint/1 declaration by the constraint specifier. For convenience multiple constraints may be declared at once with the same chr_constraint/1 declaration followed by a comma-separated list of constraint specifiers. A constraint specifier is, in its compact form, F/A where F and A are respectively the functor name and arity of the constraint, e.g.: \n\n\n\n:- chr_constraint foo/1.\n:- chr_constraint bar/2, baz/3.\n\n In its extended form, a constraint specifier is c(A_1, ... ,A_n) where c is the constraint's functor, n its arity and the A_i are argument specifiers. An argument specifier is a mode, optionally followed by a type. Example: \n\n\n\n:- chr_constraint get_value(+,?).\n:- chr_constraint domain(?int, +list(int)),\n alldifferent(?list(int)).\n\n \n\n", "prefix":"chr_constraint" }, "':- chr_option'/2": { "body":":- chr_option(${1:Option}, ${2:Value})$3\n$0", "description":":- chr_option(+Option, +Value).\nIt is possible to specify options that apply to all the CHR rules in the module. Options are specified with the chr_option/2 declaration: \n\n:- chr_option(Option,Value).\n\n and may appear in the file anywhere after the first constraints declaration. \n\nAvailable options are: \n\ncheck_guard_bindings: This option controls whether guards should be checked for (illegal) variable bindings or not. Possible values for this option are on to enable the checks, and off to disable the checks. If this option is on, any guard fails when it binds a variable that appears in the head of the rule. When the option is off (default), the behaviour of a binding in the guard is undefined.\n\noptimize: This option controls the degree of optimization. Possible values are full to enable all available optimizations, and off (default) to disable all optimizations. The default is derived from the SWI-Prolog flag optimise, where true is mapped to full. Therefore the command line option -O provides full CHR optimization. If optimization is enabled, debugging must be disabled.\n\ndebug: This option enables or disables the possibility to debug the CHR code. Possible values are on (default) and off. See section 8.4 for more details on debugging. The default is derived from the Prolog flag generate_debug_info, which is true by default. See -nodebug. If debugging is enabled, optimization must be disabled.\n\n ", "prefix":"chr_option" }, "':- chr_type'/1": { "body":":- chr_type(${1:TypeDeclaration})$2\n$0", "description":":- chr_type(+TypeDeclaration).\nUser-defined types are algebraic data types, similar to those in Haskell or the discriminated unions in Mercury. An algebraic data type is defined using chr_type/1: \n\n:- chr_type type ---> body.\n\n If the type term is a functor of arity zero (i.e. one having zero arguments), it names a monomorphic type. Otherwise, it names a polymorphic type; the arguments of the functor must be distinct type variables. The body term is defined as a sequence of constructor definitions separated by semi-colons. \n\nEach constructor definition must be a functor whose arguments (if any) are types. Discriminated union definitions must be transparent: all type variables occurring in the body must also occur in the type. 
\n\nHere are some examples of algebraic data type definitions: \n\n\n\n:- chr_type color ---> red ; blue ; yellow ; green.\n\n:- chr_type tree ---> empty ; leaf(int) ; branch(tree, tree).\n\n:- chr_type list(T) ---> [] ; [T | list(T)].\n\n:- chr_type pair(T1, T2) ---> (T1 - T2).\n\n Each algebraic data type definition introduces a distinct type. Two algebraic data types that have the same bodies are considered to be distinct types (name equivalence). \n\nConstructors may be overloaded among different types: there may be any number of constructors with a given name and arity, so long as they all have different types. \n\nAliases can be defined using ==. For example, if your program uses lists of lists of integers, you can define an alias as follows: \n\n\n\n:- chr_type lli == list(list(int)).\n\n \n\n", "prefix":"chr_type" }, "':- elif'/1": { "body":":- elif(${1:Goal})$2\n$0", "description":":- elif(:Goal).\nEquivalent to :- else. :-if(Goal). ... :- endif. In a sequence as below, the section below the first matching elif is processed. If no test succeeds, the else branch is processed. \n\n:- if(test1).\nsection_1.\n:- elif(test2).\nsection_2.\n:- elif(test3).\nsection_3.\n:- else.\nsection_else.\n:- endif.\n\n ", "prefix":"elif" }, "':- else'/0": { "body":":- else$1\n$0", "description":":- else.\nStart `else' branch.", "prefix":"else" }, "':- endif'/0": { "body":":- endif$1\n$0", "description":":- endif.\nEnd of conditional compilation.", "prefix":"endif" }, "':- expects_dialect'/1": { "body":":- expects_dialect(${1:Dialect})$2\n$0", "description":":- expects_dialect(+Dialect).\nThis directive states that the code following the directive is written for the given Prolog Dialect. See also dialect. The declaration holds until the end of the file in which it appears. The current dialect is available using prolog_load_context/2. The exact behaviour of this predicate is still subject to discussion. Of course, if Dialect matches the running dialect the directive has no effect. Otherwise we check for the existence of library(dialect/Dialect) and load it if the file is found. Currently, this file has this functionality: \n\n\n\nDefine system predicates of the requested dialect we do not have. \nApply goal_expansion/2 rules that map conflicting predicates to versions emulating the requested dialect. These expansion rules reside in the dialect compatibility module, but are applied if prolog_load_context(dialect, Dialect) is active. \nModify the search path for library directories, putting libraries compatible with the target dialect before the native libraries. \nSetup support for the default filename extension of the dialect.\n\n", "prefix":"expects_dialect" }, "':- if'/1": { "body":":- if(${1:Goal})$2\n$0", "description":":- if(:Goal).\nCompile subsequent code only if Goal succeeds. For enhanced portability, Goal is processed by expand_goal/2 before execution. If an error occurs, the error is printed and processing proceeds as if Goal has failed.", "prefix":"if" }, "':- initialization'/1": { "body":":- initialization(${1:Goal})$2\n$0", "description":"[ISO]:- initialization(:Goal).\nCall Goal after loading the source file in which this directive appears has been completed. In addition, Goal is executed if a saved state created using qsave_program/1 is restored. The ISO standard only allows for using :- Term if Term is a directive. This means that arbitrary goals can only be called from a directive by means of the initialization/1 directive. SWI-Prolog does not enforce this rule. 
\n\nThe initialization/1 directive must be used to do program initialization in saved states (see qsave_program/1). A saved state contains the predicates, Prolog flags and operators present at the moment the state was created. Other resources (records, foreign resources, etc.) must be recreated using initialization/1 directives or from the entry goal of the saved state. \n\nUp to SWI-Prolog 5.7.11, Goal was executed immediately rather than after loading the program text in which the directive appears as dictated by the ISO standard. In many cases the exact moment of execution is irrelevant, but there are exceptions. For example, load_foreign_library/1 must be executed immediately to make the loaded foreign predicates available for exporting. SWI-Prolog now provides the directive use_foreign_library/1 to ensure immediate loading as well as loading after restoring a saved state. If the system encounters a directive :- initialization(load_foreign_library(...)), it will load the foreign library immediately and issue a warning to update your code. This behaviour can be extended by providing clauses for the multifile hook predicate prolog:initialize_now(Term, Advice), where Advice is an atom that gives advice on how to resolve the compatibility issue.\n\n", "prefix":"initialization" }, "':- module'/2": { "body":":- module(${1:Module}, ${2:PublicList})$3\n$0", "description":":- module(+Module, +PublicList).\nThis directive can only be used as the first term of a source file. It declares the file to be a module file, defining a module named Module. Note that a module name is an atom. The module exports the predicates of PublicList. PublicList is a list of predicate indicators (name/arity or name//arity pairs) or operator declarations using the format op(Precedence, Type, Name). Operators defined in the export list are available inside the module as well as to modules importing this module. See also section 4.25. Compatible with Ciao Prolog, if Module is unbound, it is unified with the basename without extension of the file being loaded.\n\n", "prefix":"module" }, "':- module'/3": { "body":":- module(${1:Module}, ${2:PublicList}, ${3:Dialect})$4\n$0", "description":":- module(+Module, +PublicList, +Dialect).\nSame as module/2. The additional Dialect argument provides a list of language options. Each atom in the list Dialect is mapped to a use_module/1 goal as given below. See also section C. The third argument is supported for compatibility with the Prolog Commons project (http://prolog-commons.org/). \n\n:- use_module(library(dialect/LangOption)).\n\n \n\n", "prefix":"module" }, "':- module_transparent'/1": { "body":":- module_transparent(${1:Preds})$2\n$0", "description":":- module_transparent(+Preds).\nPreds is a comma-separated list of name/arity pairs (like dynamic/1). Each goal associated with a transparent-declared predicate will inherit the context module from its parent goal.", "prefix":"module_transparent" }, "'NAMED_PREDICATE'/3": { "body":"NAMED_PREDICATE(${1:plname}, ${2:cname}, ${3:arity})$4\n$0", "description":"NAMED_PREDICATE(plname, cname, arity).\nThis version can be used to create predicates whose name is not a valid C++ identifier. Here is a ---hypothetical--- example, which unifies the second argument with a stringified version of the first. The `cname' is used to create a name for the functions. The concrete name does not matter, but must be unique. Typically it is a descriptive name within the limitations imposed by C++ identifiers. 
\n\n NAMED_PREDICATE(\"#\", hash, 2)\n { A2 = (wchar_t*)A1;\n }\n \n\n ", "prefix":"NAMED_PREDICATE" }, "'NAMED_PREDICATE_NONDET'/3": { "body":"NAMED_PREDICATE_NONDET(${1:plname}, ${2:cname}, ${3:arity})$4\n$0", "description":"NAMED_PREDICATE_NONDET(plname, cname, arity).\nDefine a non-deterministic Prolog predicate in C++. See SWI-cpp.h. FIXME: Needs cleanup and an example.", "prefix":"NAMED_PREDICATE_NONDET" }, "'PREDICATE0'/1": { "body":"PREDICATE0(${1:name})$2\n$0", "description":"PREDICATE0(name).\nThis is the same as PREDICATE(name, 0). It avoids a compiler warning about that PL_av is not used.", "prefix":"PREDICATE0" }, "'Xserver':ensure_x_server/2": { "body": ["ensure_x_server(${1:Display}, ${2:Depth})$3\n$0" ], "description":" ensure_x_server(+Display, +Depth)\n\n Ensure the existence of a graphics environment for XPCE. This\n library uses the `head-less' server Xvfb if there is no X-server\n in the environment.\n\n Currently this library deals with Windows (where no explicit\n server is required) and Xfree using the Xfree socket naming\n conventions. Please send platform-specific code to\n info@swi-prolog.org", "prefix":"ensure_x_server" }, "(div)/2": { "body":"div(${1:IntExpr1}, ${2:IntExpr2})$3\n$0", "description":"[ISO]div(+IntExpr1, +IntExpr2).\nInteger division, defined as Result is (IntExpr1 - IntExpr1 mod IntExpr2) // IntExpr2. In other words, this is integer division that rounds towards -infinity. This function guarantees behaviour that is consistent with mod/2, i.e., the following holds for every pair of integers X,Y where Y =\\= 0. \n\n Q is div(X, Y),\n M is mod(X, Y),\n X =:= Y*Q+M.\n\n ", "prefix":"div" }, "(initialization)/2": { "body":"initialization(${1:Goal}, ${2:When})$3\n$0", "description":"initialization(:Goal, +When).\nSimilar to initialization/1, but allows for specifying when Goal is executed while loading the program text: now: Execute Goal immediately.\n\nafter_load: Execute Goal after loading program text. This is the same as initialization/1.\n\nrestore: Do not execute Goal while loading the program, but only when restoring a state.\n\n ", "prefix":"initialization" }, "(thread_initialization)/1": { "body":"thread_initialization(${1:Goal})$2\n$0", "description":"thread_initialization(:Goal).\nRun Goal when thread is started. This predicate is similar to initialization/1, but is intended for initialization operations of the runtime stacks, such as setting global variables as described in section 4.33. Goal is run on four occasions: at the call to this predicate, after loading a saved state, on starting a new thread and on creating a Prolog engine through the C interface. On loading a saved state, Goal is executed after running the initialization/1 hooks.", "prefix":"thread_initialization" }, "abolish/1": { "body":"abolish(${1:PredicateIndicator})$2\n$0", "description":"[ISO]abolish(:PredicateIndicator).\nRemoves all clauses of a predicate with functor Functor and arity Arity from the database. All predicate attributes (dynamic, multifile, index, etc.) are reset to their defaults. Abolishing an imported predicate only removes the import link; the predicate will keep its old definition in its definition module. According to the ISO standard, abolish/1 can only be applied to dynamic procedures. This is odd, as for dealing with dynamic procedures there is already retract/1 and retractall/1. The abolish/1 predicate was introduced in DEC-10 Prolog precisely for dealing with static procedures. 
In SWI-Prolog, abolish/1 works on static procedures, unless the Prolog flag iso is set to true. \n\nIt is advised to use retractall/1 for erasing all clauses of a dynamic predicate.\n\n", "prefix":"abolish" }, "abolish/2": { "body":"abolish(${1:Name}, ${2:Arity})$3\n$0", "description":"abolish(+Name, +Arity).\nSame as abolish(Name/Arity). The predicate abolish/2 conforms to the Edinburgh standard, while abolish/1 is ISO compliant.", "prefix":"abolish" }, "abort/0": { "body":"abort$1\n$0", "description":"abort.\nAbort the Prolog execution and restart the top level. If the -t toplevel command line option is given, this goal is started instead of entering the default interactive top level. Aborting is implemented by throwing the reserved exception '$aborted'. This exception can be caught using catch/3, but the recovery goal is wrapped with a predicate that prunes the choice points of the recovery goal (i.e., as once/1) and re-throws the exception. This is illustrated in the example below, where we press control-C and `a'. \n\n\n\n?- catch((repeat,fail), E, true).\n^CAction (h for help) ? abort\n% Execution Aborted\n\n ", "prefix":"abort" }, "abs/1": { "body":"abs(${1:Expr})$2\n$0", "description":"[ISO]abs(+Expr).\nEvaluate Expr and return the absolute value of it.", "prefix":"abs" }, "absolute_file_name/2": { "body":"absolute_file_name(${1:File}, ${2:Absolute})$3\n$0", "description":"absolute_file_name(+File, -Absolute).\nExpand a local filename into an absolute path. The absolute path is canonicalised: references to . and .. are deleted. This predicate ensures that expanding a filename returns the same absolute path regardless of how the file is addressed. SWI-Prolog uses absolute filenames to register source files independent of the current working directory. See also absolute_file_name/3 and expand_file_name/2.", "prefix":"absolute_file_name" }, "absolute_file_name/3": { "body":"absolute_file_name(${1:Spec}, ${2:Absolute}, ${3:Options})$4\n$0", "description":"absolute_file_name(+Spec, -Absolute, +Options).\nConvert the given file specification into an absolute path. Spec is a term Alias(Relative) (e.g., library(lists)), a relative filename or an absolute filename. The primary intention of this predicate is to resolve files specified as Alias(Relative). Options is a list of options to guide the conversion: extensions(ListOfExtensions): List of file extensions to try. Default is ''. For each extension, absolute_file_name/3 will first add the extension and then verify the conditions imposed by the other options. If the condition fails, the next extension on the list is tried. Extensions may be specified both as .ext or plain ext.\n\nrelative_to(+FileOrDir): Resolve the path relative to the given directory or the directory holding the given file. Without this option, paths are resolved relative to the working directory (see working_directory/2) or, if Spec is atomic and absolute_file_name/[2,3] is executed in a directive, it uses the current source file as reference.\n\naccess(Mode): Imposes the condition access_file(File, Mode). Mode is one of read, write, append, execute, exist or none. See also access_file/2.\n\nfile_type(Type): Defines extensions. Current mapping: txt implies [''], prolog implies ['.pl', ''], executable implies ['.so', ''], qlf implies ['.qlf', ''] and directory implies ['']. The file type source is an alias for prolog for compatibility with SICStus Prolog. See also prolog_file_type/2. 
This predicate only returns non-directories, unless the option file_type(directory) is specified.\n\nfile_errors(fail/error): If error (default), throw an existence_error exception if the file cannot be found. If fail, stay silent. Silent operation was the default up to version 3.2.6.\n\nsolutions(first/all): If first (default), the predicate leaves no choice point. Otherwise a choice point will be left and backtracking may yield more solutions.\n\nexpand(true/false): If true (default is false) and Spec is atomic, call expand_file_name/2 followed by member/2 on Spec before proceeding. This is a SWI-Prolog extension.\n\n The Prolog flag verbose_file_search can be set to true to help debugging Prolog's search for files. \n\nThis predicate is derived from Quintus Prolog. In Quintus Prolog, the argument order was absolute_file_name(+Spec, +Options, -Path). The argument order has been changed for compatibility with ISO and SICStus. The Quintus argument order is still accepted.\n\n", "prefix":"absolute_file_name" }, "access_file/2": { "body":"access_file(${1:File}, ${2:Mode})$3\n$0", "description":"access_file(+File, +Mode).\nTrue if File exists and can be accessed by this Prolog process under mode Mode. Mode is one of the atoms read, write, append, exist, none or execute. File may also be the name of a directory. Fails silently otherwise. access_file(File, none) simply succeeds without testing anything. If Mode is write or append, this predicate also succeeds if the file does not exist and the user has write access to the directory of the specified location. \n\nThe behaviour is backed up by the POSIX access() API. The Windows replacement (_waccess()) returns incorrect results because it does not consider ACLs (Access Control Lists). The Prolog flag win_file_access_check may be used to control the level of checking performed by Prolog. Please note that checking access never provides a guarantee that a subsequent open succeeds without errors due to inherent concurrency in file operations. It is generally more robust to try and open the file and handle possible exceptions. See open/4 and catch/3.\n\n", "prefix":"access_file" }, "acos/1": { "body":"acos(${1:Expr})$2\n$0", "description":"[ISO]acos(+Expr).\nResult = arccos(Expr). Result is the angle in radians.", "prefix":"acos" }, "acosh/1": { "body":"acosh(${1:Expr})$2\n$0", "description":"acosh(+Expr).\nResult = arccosh(Expr) (inverse hyperbolic cosine).", "prefix":"acosh" }, "acyclic_term/1": { "body":"acyclic_term(${1:Term})$2\n$0", "description":"[ISO]acyclic_term(@Term).\nTrue if Term does not contain cycles, i.e. can be processed recursively in finite time. See also cyclic_term/1 and section 2.16.", "prefix":"acyclic_term" }, "add_import_module/3": { "body":"add_import_module(${1:Module}, ${2:Import}, ${3:StartOrEnd})$4\n$0", "description":"add_import_module(+Module, +Import, +StartOrEnd).\nIf Import is not already an import module for Module, add it to this list at the start or end depending on StartOrEnd. See also import_module/2 and delete_import_module/2.", "prefix":"add_import_module" }, "aggregate:aggregate/3": { "body":"aggregate(${1:Template}, ${2:Goal}, ${3:Result})$4\n$0", "description":"[nondet]aggregate(+Template, :Goal, -Result).\nAggregate bindings in Goal according to Template. 
The aggregate/3 version performs bagof/3 on Goal.", "prefix":"aggregate" }, "aggregate:aggregate/4": { "body":"aggregate(${1:Template}, ${2:Discriminator}, ${3:Goal}, ${4:Result})$5\n$0", "description":"[nondet]aggregate(+Template, +Discriminator, :Goal, -Result).\nAggregate bindings in Goal according to Template. The aggregate/4 version performs setof/3 on Goal.", "prefix":"aggregate" }, "aggregate:aggregate_all/3": { "body":"aggregate_all(${1:Template}, ${2:Goal}, ${3:Result})$4\n$0", "description":"[semidet]aggregate_all(+Template, :Goal, -Result).\nAggregate bindings in Goal according to Template. The aggregate_all/3 version performs findall/3 on Goal. Note that this predicate fails if Template contains one or more of min(X), max(X), min(X,Witness) or max(X,Witness) and Goal has no solutions, i.e., the minimum and maximum of an empty set is undefined.", "prefix":"aggregate_all" }, "aggregate:aggregate_all/4": { "body":"aggregate_all(${1:Template}, ${2:Discriminator}, ${3:Goal}, ${4:Result})$5\n$0", "description":"[semidet]aggregate_all(+Template, +Discriminator, :Goal, -Result).\nAggregate bindings in Goal according to Template. The aggregate_all/4 version performs findall/3 followed by sort/2 on Goal. See aggregate_all/3 to understand why this predicate can fail.", "prefix":"aggregate_all" }, "aggregate:foreach/2": { "body":"foreach(${1:Generator}, ${2:Goal})$3\n$0", "description":"foreach(:Generator, :Goal).\nTrue if the conjunction of results is true. Unlike forall/2, which runs a failure-driven loop that proves Goal for each solution of Generator, foreach/2 creates a conjunction. Each member of the conjunction is a copy of Goal, where the variables it shares with Generator are filled with the values from the corresponding solution. The implementation executes forall/2 if Goal does not contain any variables that are not shared with Generator. \n\nHere is an example: \n\n\n\n?- foreach(between(1,4,X), dif(X,Y)), Y = 5.\nY = 5.\n?- foreach(between(1,4,X), dif(X,Y)), Y = 3.\nfalse.\n\n bug: Goal is copied repeatedly, which may cause problems if attributed variables are involved.\n\n ", "prefix":"foreach" }, "aggregate:free_variables/4": { "body":"free_variables(${1:Generator}, ${2:Template}, ${3:VarList0}, ${4:VarList})$5\n$0", "description":"[det]free_variables(:Generator, +Template, +VarList0, -VarList).\nFind free variables in bagof/setof template. In order to handle variables properly, we have to find all the universally quantified variables in the Generator. All variables as yet unbound are universally quantified, unless \n\nthey occur in the template\nthey are bound by X^P, setof/3, or bagof/3\n\n free_variables(Generator, Template, OldList, NewList) finds this set using OldList as an accumulator. \n\nauthor: - Richard O'Keefe - Jan Wielemaker (made some SWI-Prolog enhancements)\n\nlicense: Public domain (from DEC10 library).\n\nTo be done: - Distinguish between control-structures and data terms. - Exploit our built-in term_variables/2 at some places?\n\n ", "prefix":"free_variables" }, "ansi_term:ansi_format/3": { "body": ["ansi_format(${1:Attributes}, ${2:Format}, ${3:Args})$4\n$0" ], "description":" ansi_format(+Attributes, +Format, +Args) is det.\n\n Format text with ANSI attributes. 
This predicate behaves as\n format/2 using Format and Args, but if the =current_output= is a\n terminal, it adds ANSI escape sequences according to Attributes.\n For example, to print a text in bold cyan, do\n\n ==\n ?- ansi_format([bold,fg(cyan)], 'Hello ~w', [world]).\n ==\n\n Attributes is either a single attribute or a list thereof. The\n attribute names are derived from the ANSI specification. See the\n source for sgr_code/2 for details. Some commonly used attributes\n are:\n\n - bold\n - underline\n - fg(Color), bg(Color), hfg(Color), hbg(Color)\n\n Defined color constants are below. =default= can be used to\n access the default color of the terminal.\n\n - black, red, green, yellow, blue, magenta, cyan, white\n\n ANSI sequences are sent if and only if\n\n - The =current_output= has the property tty(true) (see\n stream_property/2).\n - The Prolog flag =color_term= is =true=.", "prefix":"ansi_format" }, "append/1": { "body":"append(${1:File})$2\n$0", "description":"append(+File).\nSimilar to tell/1, but positions the file pointer at the end of File rather than truncating an existing file. The pipe construct is not accepted by this predicate.", "prefix":"append" }, "apply/2": { "body":"apply(${1:Goal}, ${2:List})$3\n$0", "description":"apply(:Goal, +List).\nAppend the members of List to the arguments of Goal and call the resulting term. For example: apply(plus(1), [2, X]) calls plus(1, 2, X). New code should use call/[2..] if the length of List is fixed.", "prefix":"apply" }, "apply:exclude/3": { "body":"exclude(${1:Goal}, ${2:List1}, ${3:List2})$4\n$0", "description":"[det]exclude(:Goal, +List1, ?List2).\nFilter elements for which Goal fails. True if List2 contains those elements Xi of List1 for which call(Goal, Xi) fails.", "prefix":"exclude" }, "apply:foldl/4": { "body":"foldl(${1:Goal}, ${2:List}, ${3:V0}, ${4:V})$5\n$0", "description":"foldl(:Goal, +List, +V0, -V).\n", "prefix":"foldl" }, "apply:foldl/5": { "body":"foldl(${1:Goal}, ${2:List1}, ${3:List2}, ${4:V0}, ${5:V})$6\n$0", "description":"foldl(:Goal, +List1, +List2, +V0, -V).\n", "prefix":"foldl" }, "apply:foldl/6": { "body":"foldl(${1:Goal}, ${2:List1}, ${3:List2}, ${4:List3}, ${5:V0}, ${6:V})$7\n$0", "description":"foldl(:Goal, +List1, +List2, +List3, +V0, -V).\n", "prefix":"foldl" }, "apply:foldl/7": { "body":"foldl(${1:Goal}, ${2:List1}, ${3:List2}, ${4:List3}, ${5:List4}, ${6:V0}, ${7:V})$8\n$0", "description":"foldl(:Goal, +List1, +List2, +List3, +List4, +V0, -V).\nFold a list, using arguments of the list as left argument. The foldl family of predicates is defined by: \n\nfoldl(P, [X11,...,X1n], ..., [Xm1,...,Xmn], V0, Vn) :-\n P(X11, ..., Xm1, V0, V1),\n ...\n P(X1n, ..., Xmn, V', Vn).\n\n ", "prefix":"foldl" }, "apply:include/3": { "body":"include(${1:Goal}, ${2:List1}, ${3:List2})$4\n$0", "description":"[det]include(:Goal, +List1, ?List2).\nFilter elements for which Goal succeeds. True if List2 contains those elements Xi of List1 for which call(Goal, Xi) succeeds. See also: Older versions of SWI-Prolog had sublist/3 with the same arguments and semantics.\n\n ", "prefix":"include" }, "apply:maplist/2": { "body":"maplist(${1:Goal}, ${2:List})$3\n$0", "description":"maplist(:Goal, ?List).\nTrue if Goal can successfully be applied on all elements of List. 
Arguments are reordered to gain performance as well as to make the predicate deterministic under normal circumstances.", "prefix":"maplist" }, "apply:maplist/3": { "body":"maplist(${1:Goal}, ${2:List1}, ${3:List2})$4\n$0", "description":"maplist(:Goal, ?List1, ?List2).\nAs maplist/2, operating on pairs of elements from two lists.", "prefix":"maplist" }, "apply:maplist/4": { "body":"maplist(${1:Goal}, ${2:List1}, ${3:List2}, ${4:List3})$5\n$0", "description":"maplist(:Goal, ?List1, ?List2, ?List3).\nAs maplist/2, operating on triples of elements from three lists.", "prefix":"maplist" }, "apply:maplist/5": { "body":"maplist(${1:Goal}, ${2:List1}, ${3:List2}, ${4:List3}, ${5:List4})$6\n$0", "description":"maplist(:Goal, ?List1, ?List2, ?List3, ?List4).\nAs maplist/2, operating on quadruples of elements from four lists.", "prefix":"maplist" }, "apply:partition/4": { "body":"partition(${1:Pred}, ${2:List}, ${3:Included}, ${4:Excluded})$5\n$0", "description":"[det]partition(:Pred, +List, ?Included, ?Excluded).\nFilter elements of List according to Pred. True if Included contains all elements for which call(Pred, X) succeeds and Excluded contains the remaining elements.", "prefix":"partition" }, "apply:partition/5": { "body":"partition(${1:Pred}, ${2:List}, ${3:Less}, ${4:Equal}, ${5:Greater})$6\n$0", "description":"[semidet]partition(:Pred, +List, ?Less, ?Equal, ?Greater).\nFilter List according to Pred in three sets. For each element Xi of List, its destination is determined by call(Pred, Xi, Place), where Place must be unified to one of <, = or >. Pred must be deterministic.", "prefix":"partition" }, "apply:scanl/4": { "body":"scanl(${1:Goal}, ${2:List}, ${3:V0}, ${4:Values})$5\n$0", "description":"scanl(:Goal, +List, +V0, -Values).\n", "prefix":"scanl" }, "apply:scanl/5": { "body":"scanl(${1:Goal}, ${2:List1}, ${3:List2}, ${4:V0}, ${5:Values})$6\n$0", "description":"scanl(:Goal, +List1, +List2, +V0, -Values).\n", "prefix":"scanl" }, "apply:scanl/6": { "body":"scanl(${1:Goal}, ${2:List1}, ${3:List2}, ${4:List3}, ${5:V0}, ${6:Values})$7\n$0", "description":"scanl(:Goal, +List1, +List2, +List3, +V0, -Values).\n", "prefix":"scanl" }, "apply:scanl/7": { "body":"scanl(${1:Goal}, ${2:List1}, ${3:List2}, ${4:List3}, ${5:List4}, ${6:V0}, ${7:Values})$8\n$0", "description":"scanl(:Goal, +List1, +List2, +List3, +List4, +V0, -Values).\nLeft scan of list. The scanl family of higher order list operations is defined by: \n\nscanl(P, [X11,...,X1n], ..., [Xm1,...,Xmn], V0,\n [V0,V1,...,Vn]) :-\n P(X11, ..., Xm1, V0, V1),\n ...\n P(X1n, ..., Xmn, V', Vn).\n\n \n\n", "prefix":"scanl" }, "apply_macros:expand_phrase/2": { "body": ["expand_phrase(${1:PhraseGoal}, ${2:Goal})$3\n$0" ], "description":" expand_phrase(+PhraseGoal, -Goal) is semidet.\n expand_phrase(+PhraseGoal, +Pos0, -Goal, -Pos) is semidet.\n\n Provide goal-expansion for PhraseGoal. PhraseGoal is either\n phrase/2,3 or call_dcg/2,3. The current version does not\n translate control structures, but only simple terminals and\n non-terminals.\n\n For example:\n\n ==\n ?- expand_phrase(phrase((\"ab\", rule)), List), Goal).\n Goal = (List=[97, 98|_G121], rule(_G121, [])).\n ==\n\n @throws Re-throws errors from dcg_translate_rule/2", "prefix":"expand_phrase" }, "apply_macros:expand_phrase/4": { "body": [ "expand_phrase(${1:PhraseGoal}, ${2:Pos0}, ${3:Goal}, ${4:Pos})$5\n$0" ], "description":" expand_phrase(+PhraseGoal, -Goal) is semidet.\n expand_phrase(+PhraseGoal, +Pos0, -Goal, -Pos) is semidet.\n\n Provide goal-expansion for PhraseGoal. 
PhraseGoal is either\n phrase/2,3 or call_dcg/2,3. The current version does not\n translate control structures, but only simple terminals and\n non-terminals.\n\n For example:\n\n ==\n ?- expand_phrase(phrase((\"ab\", rule)), List), Goal).\n Goal = (List=[97, 98|_G121], rule(_G121, [])).\n ==\n\n @throws Re-throws errors from dcg_translate_rule/2", "prefix":"expand_phrase" }, "apropos/1": { "body":"apropos(${1:Pattern})$2\n$0", "description":"apropos(+Pattern).\nDisplay all predicates, functions and sections that have Pattern in their name or summary description. Lowercase letters in Pattern also match a corresponding uppercase letter. Example: ?- apropos(file). Display predicates, functions and sections that have `file' (or `File', etc.) in their summary description. ", "prefix":"apropos" }, "archive:archive_close/1": { "body":"archive_close(${1:Archive})$2\n$0", "description":"[det]archive_close(+Archive).\nClose the archive. If close_parent(true) is specified, the underlying stream is closed too. If there is an entry opened with archive_open_entry/2, actually closing the archive is delayed until the stream associated with the entry is closed. This can be used to open a stream to an archive entry without having to worry about closing the archive: \n\narchive_open_named(ArchiveFile, EntryName, Stream) :-\n archive_open(ArchiveFile, Handle, []),\n archive_next_header(Handle, Name),\n archive_open_entry(Handle, Stream),\n archive_close(Archive).\n\n ", "prefix":"archive_close" }, "archive:archive_create/3": { "body":"archive_create(${1:OutputFile}, ${2:InputFiles}, ${3:Options})$4\n$0", "description":"[det]archive_create(+OutputFile, +InputFiles, +Options).\nConvenience predicate to create an archive in OutputFile with data from a list of InputFiles and the given Options. Besides options supported by archive_open/4, the following options are supported: \n\ndirectory(+Directory): Changes the directory before adding input files. If this is specified, paths of input files must be relative to Directory and archived files will not have Directory as leading path. This is to simulate -C option of the tar program.\n\nformat(+Format): Write mode supports the following formats: `7zip, cpio, gnutar, iso9660, xar and zip`. Note that a particular installation may support only a subset of these, depending on the configuration of libarchive.\n\n ", "prefix":"archive_create" }, "archive:archive_data_stream/3": { "body":"archive_data_stream(${1:Archive}, ${2:DataStream}, ${3:Options})$4\n$0", "description":"[nondet]archive_data_stream(+Archive, -DataStream, +Options).\nTrue when DataStream is a stream to a data object inside Archive. This predicate transparently unpacks data inside possibly nested archives, e.g., a tar file inside a zip file. It applies the appropriate decompression filters and thus ensures that Prolog reads the plain data from DataStream. DataStream must be closed after the content has been processed. Backtracking opens the next member of the (nested) archive. This predicate processes the following options: meta_data(-Data:list(dict)): If provided, Data is unified with a list of filters applied to the (nested) archive to open the current DataStream. The first element describes the outermost archive. Each Data dict contains the header properties (archive_header_property/2) as well as the keys: filters(Filters:list(atom))Filter list as obtained from archive_property/2name(Atom)Name of the entry. 
\n\n Note that this predicate can handle a non-archive file as a pseudo archive holding a single stream by using archive_open/3 with the options [format(all), format(raw)].\n\n", "prefix":"archive_data_stream" }, "archive:archive_entries/2": { "body":"archive_entries(${1:Archive}, ${2:Paths})$3\n$0", "description":"[det]archive_entries(+Archive, -Paths).\nTrue when Paths is a list of pathnames appearing in Archive.", "prefix":"archive_entries" }, "archive:archive_extract/3": { "body":"archive_extract(${1:ArchiveFile}, ${2:Dir}, ${3:Options})$4\n$0", "description":"archive_extract(+ArchiveFile, +Dir, +Options).\nExtract files from the given archive into Dir. Supported options: remove_prefix(+Prefix): Strip Prefix from all entries before extracting\n\n Errors: - existence_error(directory, Dir) if Dir does not exist or is not a directory. - domain_error(path_prefix(Prefix), Path) if a path in the archive does not start with Prefix\n\nTo be done: Add options\n\n ", "prefix":"archive_extract" }, "archive:archive_header_property/2": { "body":"archive_header_property(${1:Archive}, ${2:Property})$3\n$0", "description":"archive_header_property(+Archive, ?Property).\nTrue when Property is a property of the current header. Defined properties are: filetype(-Type): Type is one of file, link, socket, character_device, block_device, directory or fifo. It appears that this library can also return other values. These are returned as an integer.\n\nmtime(-Time): True when entry was last modified at time.\n\nsize(-Bytes): True when entry is Bytes long.\n\nlink_target(-Target): Target for a link. Currently only supported for symbolic links.\n\nformat(-Format): Provides the name of the archive format applicable to the current entry. The returned value is the lowercase version of the output of archive_format_name().\n\npermissions(-Integer): True when entry has the indicated permission mask.\n\n ", "prefix":"archive_header_property" }, "archive:archive_next_header/2": { "body":"archive_next_header(${1:Handle}, ${2:Name})$3\n$0", "description":"[semidet]archive_next_header(+Handle, -Name).\nForward to the next entry of the archive for which Name unifies with the pathname of the entry. Fails silently if the end of the archive is reached before success. Name is typically specified if a single entry must be accessed and unbound otherwise. The following example opens a Prolog stream to a given archive entry. Note that Stream must be closed using close/1 and the archive must be closed using archive_close/1 after the data has been used. See also setup_call_cleanup/3. \n\nopen_archive_entry(ArchiveFile, Entry, Stream) :-\n open(ArchiveFile, read, In, [type(binary)]),\n archive_open(In, Archive, [close_parent(true)]),\n archive_next_header(Archive, Entry),\n archive_open_entry(Archive, Stream).\n\n Errors: permission_error(next_header, archive, Handle) if a previously opened entry is not closed.\n\n ", "prefix":"archive_next_header" }, "archive:archive_open/3": { "body": ["archive_open(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"archive_open('Param1','Param2','Param3')", "prefix":"archive_open" }, "archive:archive_open/4": { "body":"archive_open(${1:Data}, ${2:Mode}, ${3:Archive}, ${4:Options})$5\n$0", "description":"[det]archive_open(+Data, +Mode, -Archive, +Options).\nOpen the archive in Data and unify Archive with a handle to the opened archive. Data is either a file or a stream that contains a valid archive. Details are controlled by Options. 
Typically, the option close_parent(true) is used to close the stream if the archive is closed using archive_close/1. For other options, the defaults are typically fine. The option format(raw) must be used to process compressed streams that do not contain explicit entries (e.g., gzip'ed data) unambiguously. The raw format creates a pseudo archive holding a single member named data. close_parent(+Boolean): If this option is true (default false), Stream is closed if archive_close/1 is called on Archive.\n\ncompression(+Compression): Synonym for filter(Compression). Deprecated.\n\nfilter(+Filter): Support the indicated filter. This option may be used multiple times to support multiple filters. In read mode, if no filter options are provided, all is assumed. In write mode, none is assumed. Supported values are all, bzip2, compress, gzip, grzip, lrzip, lzip, lzma, lzop, none, rpm, uu and xz. The value all is default for read, none for write.\n\nformat(+Format): Support the indicated format. This option may be used multiple times to support multiple formats in read mode. In write mode, you must supply a single format. If no format options are provided, all is assumed for read mode. Note that all does not include raw. To open both archive and non-archive files, both format(all) and format(raw) must be specified. Supported values are: all, 7zip, ar, cab, cpio, empty, gnutar, iso9660, lha, mtree, rar, raw, tar, xar and zip. The value all is default for read.\n\n Note that the actually supported compression types and formats may vary depending on the version and installation options of the underlying libarchive library. This predicate raises a domain error if the (explicitly) requested format is not supported. \n\nErrors: - domain_error(filter, Filter) if the requested filter is not supported. - domain_error(format, Format) if the requested format type is not supported.\n\n ", "prefix":"archive_open" }, "archive:archive_open_entry/2": { "body":"archive_open_entry(${1:Archive}, ${2:Stream})$3\n$0", "description":"[det]archive_open_entry(+Archive, -Stream).\nOpen the current entry as a stream. Stream must be closed. If the stream is not closed before the next call to archive_next_header/2, a permission error is raised.", "prefix":"archive_open_entry" }, "archive:archive_property/2": { "body":"archive_property(${1:Handle}, ${2:Property})$3\n$0", "description":"[nondet]archive_property(+Handle, ?Property).\nTrue when Property is a property of the archive Handle. Defined properties are: filters(List): True when the indicated filters are applied before reaching the archive format.\n\n ", "prefix":"archive_property" }, "archive:archive_set_header_property/2": { "body":"archive_set_header_property(${1:Archive}, ${2:Property})$3\n$0", "description":"archive_set_header_property(+Archive, +Property).\nSet Property of the current header. Write-mode only. Defined properties are: filetype(-Type): Type is one of file, link, socket, character_device, block_device, directory or fifo. It appears that this library can also return other values. These are returned as an integer.\n\nmtime(-Time): True when entry was last modified at time.\n\nsize(-Bytes): True when entry is Bytes long.\n\nlink_target(-Target): Target for a link. 
Currently only supported for symbolic links.\n\n ", "prefix":"archive_set_header_property" }, "area:side_pattern/3": { "body": ["side_pattern(${1:SideA}, ${2:SideB}, ${3:Pattern})$4\n$0" ], "description":" side_pattern(+SideA, +SideB, -Pattern)\n\n Pattern is the bit if SideA on area A corresponds to SideB on area B.", "prefix":"side_pattern" }, "arg/3": { "body":"arg(${1:Arg}, ${2:Term}, ${3:Value})$4\n$0", "description":"[ISO]arg(?Arg, +Term, ?Value).\nTerm should be instantiated to a term, Arg to an integer between 1 and the arity of Term. Value is unified with the Arg-th argument of Term. Arg may also be unbound. In this case Value will be unified with the successive arguments of the term. On successful unification, Arg is unified with the argument number. Backtracking yields alternative solutions.91The instantiation pattern (-, +, ?) is an extension to `standard' Prolog. Some systems provide genarg/3 that covers this pattern. The predicate arg/3 fails silently if Arg = 0 or Arg > arity and raises the exception domain_error(not_less_than_zero, Arg) if Arg < 0.", "prefix":"arg" }, "arithmetic:arithmetic_expression_value/2": { "body": ["arithmetic_expression_value(${1:Expression}, ${2:Result})$3\n$0" ], "description":" arithmetic_expression_value(:Expression, -Result) is det.\n\n True when Result unifies with the arithmetic result of\n evaluating Expression.", "prefix":"arithmetic_expression_value" }, "arithmetic:arithmetic_function/1": { "body": ["arithmetic_function(${1:NameArity})$2\n$0" ], "description":" arithmetic_function(:NameArity) is det.\n\n Declare a predicate as an arithmetic function.\n\n @deprecated This function provides a partial work around for\n pure Prolog user-defined arithmetic functions that has been\n dropped in SWI-Prolog 5.11.23. Notably, it only deals with\n expression know at compile time.", "prefix":"arithmetic_function" }, "asin/1": { "body":"asin(${1:Expr})$2\n$0", "description":"[ISO]asin(+Expr).\nResult = arcsin(Expr). Result is the angle in radians.", "prefix":"asin" }, "asinh/1": { "body":"asinh(${1:Expr})$2\n$0", "description":"asinh(+Expr).\nResult = arcsinh(Expr) (inverse hyperbolic sine).", "prefix":"asinh" }, "assert/1": { "body":"assert(${1:Term})$2\n$0", "description":"assert(+Term).\nEquivalent to assertz/1. Deprecated: new code should use assertz/1.", "prefix":"assert" }, "assert/2": { "body":"assert(${1:Term}, ${2:Reference})$3\n$0", "description":"assert(+Term, -Reference).\nEquivalent to assertz/2. Deprecated: new code should use assertz/2.", "prefix":"assert" }, "asserta/1": { "body":"asserta(${1:Term})$2\n$0", "description":"[ISO]asserta(+Term).\nAssert a fact or clause in the database. Term is asserted as the first fact or clause of the corresponding predicate. Equivalent to assert/1, but Term is asserted as first clause or fact of the predicate. If the program space for the target module is limited (see set_module/1), asserta/1 can raise a resource_error(program_space).", "prefix":"asserta" }, "asserta/2": { "body":"asserta(${1:Term}, ${2:Reference})$3\n$0", "description":"asserta(+Term, -Reference).\nAsserts a clause as asserta/1 and unifies Reference with a handle to this clause. 
The handle can be used to access this specific clause using clause/3 and erase/1.", "prefix":"asserta" }, "assertz/1": { "body":"assertz(${1:Term})$2\n$0", "description":"[ISO]assertz(+Term).\nEquivalent to asserta/1, but Term is asserted as the last clause or fact of the predicate.", "prefix":"assertz" }, "assertz/2": { "body":"assertz(${1:Term}, ${2:Reference})$3\n$0", "description":"assertz(+Term, -Reference).\nEquivalent to asserta/1, asserting the new clause as the last clause of the predicate.", "prefix":"assertz" }, "assoc:assoc_to_keys/2": { "body":"assoc_to_keys(${1:Assoc}, ${2:Keys})$3\n$0", "description":"[det]assoc_to_keys(+Assoc, -Keys).\nTrue if Keys is the list of keys in Assoc. The keys are sorted in ascending order.", "prefix":"assoc_to_keys" }, "assoc:assoc_to_list/2": { "body":"assoc_to_list(${1:Assoc}, ${2:Pairs})$3\n$0", "description":"[det]assoc_to_list(+Assoc, -Pairs).\nTranslate Assoc to a list Pairs of Key-Value pairs. The keys in Pairs are sorted in ascending order.", "prefix":"assoc_to_list" }, "assoc:assoc_to_values/2": { "body":"assoc_to_values(${1:Assoc}, ${2:Values})$3\n$0", "description":"[det]assoc_to_values(+Assoc, -Values).\nTrue if Values is the list of values in Assoc. Values are ordered in ascending order of the key to which they were associated. Values may contain duplicates.", "prefix":"assoc_to_values" }, "assoc:del_assoc/4": { "body":"del_assoc(${1:Key}, ${2:Assoc0}, ${3:Value}, ${4:Assoc})$5\n$0", "description":"[semidet]del_assoc(+Key, +Assoc0, ?Value, -Assoc).\nTrue if Key-Value is in Assoc0. Assoc is Assoc0 with Key-Value removed.", "prefix":"del_assoc" }, "assoc:del_max_assoc/4": { "body":"del_max_assoc(${1:Assoc0}, ${2:Key}, ${3:Val}, ${4:Assoc})$5\n$0", "description":"[semidet]del_max_assoc(+Assoc0, ?Key, ?Val, -Assoc).\nTrue if Key-Value is in Assoc0 and Key is the greatest key. Assoc is Assoc0 with Key-Value removed. Warning: This will succeed with no bindings for Key or Val if Assoc0 is empty.", "prefix":"del_max_assoc" }, "assoc:del_min_assoc/4": { "body":"del_min_assoc(${1:Assoc0}, ${2:Key}, ${3:Val}, ${4:Assoc})$5\n$0", "description":"[semidet]del_min_assoc(+Assoc0, ?Key, ?Val, -Assoc).\nTrue if Key-Value is in Assoc0 and Key is the smallest key. Assoc is Assoc0 with Key-Value removed. Warning: This will succeed with no bindings for Key or Val if Assoc0 is empty.", "prefix":"del_min_assoc" }, "assoc:empty_assoc/1": { "body":"empty_assoc(${1:Assoc})$2\n$0", "description":"[semidet]empty_assoc(?Assoc).\nIs true if Assoc is the empty association list.", "prefix":"empty_assoc" }, "assoc:gen_assoc/3": { "body":"gen_assoc(${1:Key}, ${2:Assoc}, ${3:Value})$4\n$0", "description":"[nondet]gen_assoc(?Key, +Assoc, ?Value).\nTrue if Key-Value is an association in Assoc. Enumerates keys in ascending order on backtracking. See also: get_assoc/3.\n\n ", "prefix":"gen_assoc" }, "assoc:get_assoc/3": { "body":"get_assoc(${1:Key}, ${2:Assoc}, ${3:Value})$4\n$0", "description":"[semidet]get_assoc(+Key, +Assoc, -Value).\nTrue if Key-Value is an association in Assoc. Errors: type_error(assoc, Assoc) if Assoc is not an association list.\n\n ", "prefix":"get_assoc" }, "assoc:get_assoc/5": { "body":"get_assoc(${1:Key}, ${2:Assoc0}, ${3:Val0}, ${4:Assoc}, ${5:Val})$6\n$0", "description":"[semidet]get_assoc(+Key, +Assoc0, ?Val0, ?Assoc, ?Val).\nTrue if Key-Val0 is in Assoc0 and Key-Val is in Assoc.", "prefix":"get_assoc" }, "assoc:is_assoc/1": { "body":"is_assoc(${1:Assoc})$2\n$0", "description":"[semidet]is_assoc(+Assoc).\nTrue if Assoc is an association list. 
This predicate checks that the structure is valid, elements are in order, and tree is balanced to the extent guaranteed by AVL trees. I.e., branches of each subtree differ in depth by at most 1.", "prefix":"is_assoc" }, "assoc:list_to_assoc/2": { "body":"list_to_assoc(${1:Pairs}, ${2:Assoc})$3\n$0", "description":"[det]list_to_assoc(+Pairs, -Assoc).\nCreate an association from a list Pairs of Key-Value pairs. List must not contain duplicate keys. Errors: domain_error(unique_key_pairs, List) if List contains duplicate keys\n\n ", "prefix":"list_to_assoc" }, "assoc:map_assoc/2": { "body":"map_assoc(${1:Pred}, ${2:Assoc})$3\n$0", "description":"[semidet]map_assoc(:Pred, +Assoc).\nTrue if Pred(Value) is true for all values in Assoc.", "prefix":"map_assoc" }, "assoc:map_assoc/3": { "body":"map_assoc(${1:Pred}, ${2:Assoc0}, ${3:Assoc})$4\n$0", "description":"[semidet]map_assoc(:Pred, +Assoc0, ?Assoc).\nMap corresponding values. True if Assoc is Assoc0 with Pred applied to all corresponding pairs of values.", "prefix":"map_assoc" }, "assoc:max_assoc/3": { "body":"max_assoc(${1:Assoc}, ${2:Key}, ${3:Value})$4\n$0", "description":"[semidet]max_assoc(+Assoc, -Key, -Value).\nTrue if Key-Value is in Assoc and Key is the largest key.", "prefix":"max_assoc" }, "assoc:min_assoc/3": { "body":"min_assoc(${1:Assoc}, ${2:Key}, ${3:Value})$4\n$0", "description":"[semidet]min_assoc(+Assoc, -Key, -Value).\nTrue if Key-Value is in Assoc and Key is the smallest key.", "prefix":"min_assoc" }, "assoc:ord_list_to_assoc/2": { "body":"ord_list_to_assoc(${1:Pairs}, ${2:Assoc})$3\n$0", "description":"[det]ord_list_to_assoc(+Pairs, -Assoc).\nAssoc is created from an ordered list Pairs of Key-Value pairs. The pairs must occur in strictly ascending order of their keys. Errors: domain_error(key_ordered_pairs, List) if pairs are not ordered.\n\n ", "prefix":"ord_list_to_assoc" }, "assoc:put_assoc/4": { "body":"put_assoc(${1:Key}, ${2:Assoc0}, ${3:Value}, ${4:Assoc})$5\n$0", "description":"[det]put_assoc(+Key, +Assoc0, +Value, -Assoc).\nAssoc is Assoc0, except that Key is associated with Value. This can be used to insert and change associations.", "prefix":"put_assoc" }, "at_end_of_stream/1": { "body":"at_end_of_stream(${1:Stream})$2\n$0", "description":"[ISO]at_end_of_stream(+Stream).\nSucceeds after the last character of the named stream is read, or Stream is not a valid input stream. The end-of-stream test is only available on buffered input streams (unbuffered input streams are rarely used; see open/4).", "prefix":"at_end_of_stream" }, "at_halt/1": { "body":"at_halt(${1:Goal})$2\n$0", "description":"at_halt(:Goal).\nRegister Goal to be run from PL_cleanup(), which is called when the system halts. The hooks are run in the reverse order they were registered (LIFO). Success or failure executing a hook is ignored. If the hook raises an exception, this is printed using print_message/2. An attempt to call halt/[0,1] from a hook is ignored. Hooks may call cancel_halt/1, causing halt/0 and PL_halt(0) to print a message indicating that halting the system has been cancelled.", "prefix":"at_halt" }, "atan/1": { "body":"atan(${1:Expr})$2\n$0", "description":"[ISO]atan(+Expr).\nResult = arctan(Expr). 
Result is the angle in radians.", "prefix":"atan" }, "atan/2": { "body":"atan(${1:YExpr}, ${2:XExpr})$3\n$0", "description":"atan(+YExpr, +XExpr).\nSame as atan2/2 (backward compatibility).", "prefix":"atan" }, "atan2/2": { "body":"atan2(${1:YExpr}, ${2:XExpr})$3\n$0", "description":"[ISO]atan2(+YExpr, +XExpr).\nResult = arctan(YExpr/XExpr). Result is the angle in radians. The return value is in the range [- pi ... pi ]. Used to convert between rectangular and polar coordinate system. Note that the ISO Prolog standard demands atan2(0.0,0.0) to raise an evaluation error, whereas the C99 and POSIX standards demand this to evaluate to 0.0. SWI-Prolog follows C99 and POSIX.\n\n", "prefix":"atan2" }, "atanh/1": { "body":"atanh(${1:Expr})$2\n$0", "description":"atanh(+Expr).\nResult = arctanh(Expr). (inverse hyperbolic tangent).", "prefix":"atanh" }, "atom/1": { "body":"atom(${1:Term})$2\n$0", "description":"[ISO]atom(@Term).\nTrue if Term is bound to an atom.", "prefix":"atom" }, "atom_chars/2": { "body":"atom_chars(${1:Atom}, ${2:CharList})$3\n$0", "description":"[ISO]atom_chars(?Atom, ?CharList).\nAs atom_codes/2, but CharList is a list of one-character atoms rather than a list of character codes.94Up to version 3.2.x, atom_chars/2 behaved as the current atom_codes/2. The current definition is compliant with the ISO standard. \n\n?- atom_chars(hello, X).\n\nX = [h, e, l, l, o]\n\n ", "prefix":"atom_chars" }, "atom_codes/2": { "body":"atom_codes(${1:Atom}, ${2:String})$3\n$0", "description":"[ISO]atom_codes(?Atom, ?String).\nConvert between an atom and a list of character codes. If Atom is instantiated, it will be translated into a list of character codes and the result is unified with String. If Atom is unbound and String is a list of character codes, Atom will be unified with an atom constructed from this list.", "prefix":"atom_codes" }, "atom_concat/3": { "body":"atom_concat(${1:Atom1}, ${2:Atom2}, ${3:Atom3})$4\n$0", "description":"[ISO]atom_concat(?Atom1, ?Atom2, ?Atom3).\nAtom3 forms the concatenation of Atom1 and Atom2. At least two of the arguments must be instantiated to atoms. This predicate also allows for the mode (-,-,+), non-deterministically splitting the 3rd argument into two parts (as append/3 does for lists). SWI-Prolog allows for atomic arguments. Portable code must use atomic_concat/3 if non-atom arguments are involved.", "prefix":"atom_concat" }, "atom_length/2": { "body":"atom_length(${1:Atom}, ${2:Length})$3\n$0", "description":"[ISO]atom_length(+Atom, -Length).\nTrue if Atom is an atom of Length characters. The SWI-Prolog version accepts all atomic types, as well as code-lists and character-lists. New code should avoid this feature and use write_length/3 to get the number of characters that would be written if the argument was handed to write_term/3.", "prefix":"atom_length" }, "atom_number/2": { "body":"atom_number(${1:Atom}, ${2:Number})$3\n$0", "description":"atom_number(?Atom, ?Number).\nRealises the popular combination of atom_codes/2 and number_codes/2 to convert between atom and number (integer or float) in one predicate, avoiding the intermediate list. 
Unlike the ISO number_codes/2 predicates, atom_number/2 fails silently in mode (+,-) if Atom does not represent a number.97Versions prior to 6.1.7 raise a syntax error, compliant to number_codes/2 See also atomic_list_concat/2 for assembling an atom from atoms and numbers.", "prefix":"atom_number" }, "atom_prefix/2": { "body":"atom_prefix(${1:Atom}, ${2:Prefix})$3\n$0", "description":"[deprecated]atom_prefix(+Atom, +Prefix).\nTrue if Atom starts with the characters from Prefix. Its behaviour is equivalent to ?- sub_atom(Atom, 0, _, _, Prefix). Deprecated.", "prefix":"atom_prefix" }, "atom_string/2": { "body":"atom_string(${1:Atom}, ${2:String})$3\n$0", "description":"atom_string(?Atom, ?String).\nBi-directional conversion between an atom and a string. At least one of the two arguments must be instantiated. Atom can also be an integer or floating point number.", "prefix":"atom_string" }, "atom_to_stem_list/2": { "body":"atom_to_stem_list(${1:In}, ${2:ListOfStems})$3\n$0", "description":"atom_to_stem_list(+In, -ListOfStems).\nCombines the three above routines, returning a list holding an atom with the stem of each word encountered and numbers for encountered numbers.", "prefix":"atom_to_stem_list" }, "atom_to_term/3": { "body":"atom_to_term(${1:Atom}, ${2:Term}, ${3:Bindings})$4\n$0", "description":"[deprecated]atom_to_term(+Atom, -Term, -Bindings).\nUse Atom as input to read_term/2 using the option variable_names and return the read term in Term and the variable bindings in Bindings. Bindings is a list of Name = Var couples, thus providing access to the actual variable names. See also read_term/2. If Atom has no valid syntax, a syntax_error exception is raised. New code should use read_term_from_atom/3.", "prefix":"atom_to_term" }, "atomic/1": { "body":"atomic(${1:Term})$2\n$0", "description":"[ISO]atomic(@Term).\nTrue if Term is bound (i.e., not a variable) and is not compound. Thus, atomic acts as if defined by: \n\natomic(Term) :-\n nonvar(Term),\n \\+ compound(Term).\n\n SWI-Prolog defines the following atomic datatypes: atom (atom/1), string (string/1), integer (integer/1), floating point number (float/1) and blob (blob/2). In addition, the symbol [] (empty list) is atomic, but not an atom. See section 5.1.\n\n", "prefix":"atomic" }, "atomic_concat/3": { "body":"atomic_concat(${1:Atomic1}, ${2:Atomic2}, ${3:Atom})$4\n$0", "description":"atomic_concat(+Atomic1, +Atomic2, -Atom).\nAtom represents the text after converting Atomic1 and Atomic2 to text and concatenating the result: \n\n?- atomic_concat(name, 42, X).\nX = name42.\n\n ", "prefix":"atomic_concat" }, "atomic_list_concat/2": { "body":"atomic_list_concat(${1:List}, ${2:Atom})$3\n$0", "description":"[commons]atomic_list_concat(+List, -Atom).\nList is a list of strings, atoms, integers or floating point numbers. Succeeds if Atom can be unified with the concatenated elements of List. Equivalent to atomic_list_concat(List, '', Atom).", "prefix":"atomic_list_concat" }, "atomic_list_concat/3": { "body":"atomic_list_concat(${1:List}, ${2:Separator}, ${3:Atom})$4\n$0", "description":"[commons]atomic_list_concat(+List, +Separator, -Atom).\nCreates an atom just like atomic_list_concat/2, but inserts Separator between each pair of inputs. For example: \n\n?- atomic_list_concat([gnu, gnat], ', ', A).\n\nA = 'gnu, gnat'\n\n The SWI-Prolog version of this predicate can also be used to split atoms by instantiating Separator and Atom as shown below. 
We kept this functionality to simplify porting old SWI-Prolog code where this predicate was called concat_atom/3. When used in mode (-,+,+), Separator must be a non-empty atom. See also split_string/4. \n\n\n\n?- atomic_list_concat(L, -, 'gnu-gnat').\n\nL = [gnu, gnat]\n\n ", "prefix":"atomic_list_concat" }, "atomics_to_string/2": { "body":"atomics_to_string(${1:List}, ${2:String})$3\n$0", "description":"atomics_to_string(+List, -String).\nList is a list of strings, atoms, integers or floating point numbers. Succeeds if String can be unified with the concatenated elements of List. Equivalent to atomics_to_string(List, '', String).", "prefix":"atomics_to_string" }, "atomics_to_string/3": { "body":"atomics_to_string(${1:List}, ${2:Separator}, ${3:String})$4\n$0", "description":"atomics_to_string(+List, +Separator, -String).\nCreates a string just like atomics_to_string/2, but inserts Separator between each pair of inputs. For example: \n\n?- atomics_to_string([gnu, \"gnat\", 1], ', ', A).\n\nA = \"gnu, gnat, 1\"\n\n ", "prefix":"atomics_to_string" }, "attr_portray_hook/2": { "body":"attr_portray_hook(${1:AttValue}, ${2:Var})$3\n$0", "description":"[deprecated]attr_portray_hook(+AttValue, +Var).\nCalled by write_term/2 and friends for each attribute if the option attributes(portray) is in effect. If the hook succeeds the attribute is considered printed. Otherwise Module = ... is printed to indicate the existence of a variable. This predicate is deprecated because it cannot work with pure interface predicates like copy_term/3. Use attribute_goals/3 instead to map attributes to residual goals.", "prefix":"attr_portray_hook" }, "attr_unify_hook/2": { "body":"attr_unify_hook(${1:AttValue}, ${2:VarValue})$3\n$0", "description":"attr_unify_hook(+AttValue, +VarValue).\nA hook that must be defined in the module to which an attributed variable refers. It is called after the attributed variable has been unified with a non-var term, possibly another attributed variable. AttValue is the attribute that was associated to the variable in this module and VarValue is the new value of the variable. If this predicate fails, the unification fails. If VarValue is another attributed variable the hook often combines the two attributes and associates the combined attribute with VarValue using put_attr/3. To be done: The way in which this hook currently works makes the implementation of important classes of constraint solvers impossible or at least extremely impractical. For increased generality and convenience, simultaneous unifications as in [X,Y]=[0,1] should be processed sequentially by the Prolog engine, or a more general hook should be provided in the future. See Triska, 2016 for more information.\n\n ", "prefix":"attr_unify_hook" }, "attribute_goals/1": { "body":"attribute_goals(${1:Var})$2\n$0", "description":"attribute_goals(+Var)//.\nThis nonterminal is the main mechanism in which residual constraints are obtained. It is called in every module where it is defined, and Var has an attribute. Its argument is that variable. In each module, attribute_goals/3 must describe a list of Prolog goals that are declaratively equivalent to the goals that caused the attributes of that module to be present and in their current state. It is always possible to do this (since these attributes stem from such goals), and it is the responsibility of constraint library authors to provide this mapping without exposing any library internals. 
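As a minimal sketch, assuming a hypothetical constraint module my_domain that stores the set of admissible values in the variable's attribute, the nonterminal could be defined in that module as:

attribute_goals(Var) -->
    { get_attr(Var, my_domain, Domain) },     % fetch this module's attribute
    [ my_domain:domain(Var, Domain) ].        % residualise it as the public domain/2 constraint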
Ideally and typically, remaining relevant attributes are mapped to pure and potentially simplified Prolog goals that satisfy both of the following: \n\nThey are declaratively equivalent to the constraints that were originally posted. \nThey use only predicates that are themselves exported and documented in the modules they stem from.\n\n The latter property ensures that users can reason about residual goals, and see for themselves whether a constraint library behaves correctly. It is this property that makes it possible to thoroughly test constraint solvers by contrasting obtained residual goals with expected answers. \n\nThis nonterminal is used by copy_term/3, on which the Prolog top level relies to ensure the basic invariant of pure Prolog programs: The answer is declaratively equivalent to the query. \n\nNote that instead of defaulty representations, a Prolog list is used to represent residual goals. This simplifies processing and reasoning about residual goals throughout all programs that need this functionality.\n\n", "prefix":"attribute_goals" }, "attvar/1": { "body":"attvar(${1:Term})$2\n$0", "description":"attvar(@Term).\nSucceeds if Term is an attributed variable. Note that var/1 also succeeds on attributed variables. Attributed variables are created with put_attr/3.", "prefix":"attvar" }, "autoload/0": { "body":"autoload$1\n$0", "description":"autoload.\nCheck the current Prolog program for predicates that are referred to, are undefined and have a definition in the Prolog library. Load the appropriate libraries. This predicate is used by qsave_program/[1,2] to ensure the saved state does not depend on availability of the libraries. The predicate autoload/0 examines all clauses of the loaded program (obtained with clause/2) and analyzes the body for referenced goals. Such an analysis cannot be complete in Prolog, which allows for the creation of arbitrary terms at runtime and the use of them as a goal. The current analysis is limited to the following: \n\n\n\nDirect goals appearing in the body\nArguments of declared meta-predicates that are marked with an integer (0..9). See meta_predicate/1.\n\n The analysis of meta-predicate arguments is limited to cases where the argument appears literally in the clause or is assigned using =/2 before the meta-call. That is, the following fragment is processed correctly: \n\n\n\n ...,\n Goal = prove(Theory),\n forall(current_theory(Theory),\n Goal)),\n\n But, the calls to prove_simple/1 and prove_complex/1 in the example below are not discovered by the analysis and therefore the modules that define these predicates must be loaded explicitly using use_module/1,2. \n\n\n\n ...,\n member(Goal, [ prove_simple(Theory),\n prove_complex(Theory)\n ]),\n forall(current_theory(Theory),\n Goal)),\n\n It is good practice to use gxref/0 to make sure that the program has sufficient declarations such that the analaysis tools can verify that all required predicates can be resolved and that all code is called. See meta_predicate/1, dynamic/1, public/1 and prolog:called_by/2.\n\n", "prefix":"autoload" }, "autoload_path/1": { "body":"autoload_path(${1:DirAlias})$2\n$0", "description":"autoload_path(+DirAlias).\nAdd DirAlias to the libraries that are used by the autoloader. This extends the search path autoload and reloads the library index. For example: \n\n:- autoload_path(library(http)).\n\n If this call appears as a directive, it is term-expanded into a clause for user:file_search_path/2 and a directive calling reload_library_index/0. 
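Roughly, and only as a sketch of that expansion, the directive above behaves as if the file contained:

user:file_search_path(autoload, library(http)).
:- reload_library_index.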
This keeps source information and allows for removing this directive.\n\n", "prefix":"autoload_path" }, "b_getval/2": { "body":"b_getval(${1:Name}, ${2:Value})$3\n$0", "description":"b_getval(+Name, -Value).\nGet the value associated with the global variable Name and unify it with Value. Note that this unification may further instantiate the value of the global variable. If this is undesirable the normal precautions (double negation or copy_term/2) must be taken. The b_getval/2 predicate generates errors if Name is not an atom or the requested variable does not exist.", "prefix":"b_getval" }, "b_set_dict/3": { "body":"b_set_dict(${1:Key}, ${2:Dict}, ${3:Value})$4\n$0", "description":"[det]b_set_dict(+Key, !Dict, +Value).\nDestructively update the value associated with Key in Dict to Value. The update is trailed and undone on backtracking. This predicate raises an existence error if Key does not appear in Dict. The update semantics are equivalent to setarg/3 and b_setval/2.", "prefix":"b_set_dict" }, "b_setval/2": { "body":"b_setval(${1:Name}, ${2:Value})$3\n$0", "description":"b_setval(+Name, +Value).\nAssociate the term Value with the atom Name or replace the currently associated value with Value. If Name does not refer to an existing global variable, a variable with initial value [] is created (the empty list). On backtracking the assignment is reversed.", "prefix":"b_setval" }, "backward_compatibility:'$apropos_match'/2": { "body": ["\\$apropos_match(${1:Needle}, ${2:Haystack})$3\n$0" ], "description":" '$apropos_match'(+Needle, +Haystack) is semidet.\n\n True if Needle is a sub atom of Haystack. Ignores the case\n of Haystack.", "prefix":"$apropos_match" }, "backward_compatibility:'$arch'/2": { "body": ["\\$arch(${1:Architecture}, ${2:Version})$3\n$0" ], "description":" '$arch'(-Architecture, -Version) is det.\n\n @deprecated use current_prolog_flag(arch, Architecture)", "prefix":"$arch" }, "backward_compatibility:'$argv'/1": { "body": ["\\$argv(${1:Argv})$2\n$0" ], "description":" '$argv'(-Argv:list) is det.\n\n @deprecated use current_prolog_flag(os_argv, Argv) or\n current_prolog_flag(argv, Argv)", "prefix":"$argv" }, "backward_compatibility:'$declare_module'/3": { "body": ["\\$declare_module(${1:Module}, ${2:File}, ${3:Line})$4\n$0" ], "description":" '$declare_module'(Module, File, Line)\n\n Used in triple20 particle library. Should use a public interface", "prefix":"$declare_module" }, "backward_compatibility:'$home'/1": { "body": ["\\$home(${1:SWIPrologDir})$2\n$0" ], "description":" '$home'(-SWIPrologDir) is det.\n\n @deprecated use current_prolog_flag(home, SWIPrologDir)\n @see file_search_path/2, absolute_file_name/3, The Prolog home\n directory is available through the alias =swi=.", "prefix":"$home" }, "backward_compatibility:'$module'/2": { "body": ["\\$module(${1:OldTypeIn}, ${2:NewTypeIn})$3\n$0" ], "description":" '$module'(-OldTypeIn, +NewTypeIn)", "prefix":"$module" }, "backward_compatibility:'$set_prompt'/1": { "body": ["\\$set_prompt(${1:Prompt})$2\n$0" ], "description":" '$set_prompt'(+Prompt) is det.\n\n Set the prompt for the toplevel\n\n @deprecated use set_prolog_flag(toplevel_prompt, Prompt).", "prefix":"$set_prompt" }, "backward_compatibility:'$strip_module'/3": { "body": ["\\$strip_module(${1:Term}, ${2:Module}, ${3:Plain})$4\n$0" ], "description":" '$strip_module'(+Term, -Module, -Plain)\n\n This used to be an internal predicate. It was added to the XPCE\n compatibility library without $ and since then used at many\n places. 
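An illustrative query, using the unprefixed strip_module/3, which behaves the same way:

?- strip_module(lists:append(Xs, Ys, Zs), M, Goal).
M = lists,
Goal = append(Xs, Ys, Zs).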
From 5.4.1 onwards strip_module/3 is built-in and the $\n variation is added here for compatibility.\n\n @deprecated Use strip_module/3.", "prefix":"$strip_module" }, "backward_compatibility:'$version'/1": { "body": ["\\$version(${1:Version})$2\n$0" ], "description":" '$version'(Version:integer) is det.\n\n @deprecated use current_prolog_flag(version, Version)", "prefix":"$version" }, "backward_compatibility:'C'/3": { "body": ["C(${1:List}, ${2:Head}, ${3:Tail})$4\n$0" ], "description":" 'C'(?List, ?Head, ?Tail) is det.\n\n Used to be generated by DCG. Some people appear to be using in\n in normal code too.\n\n @deprecated Do not use in normal code; DCG no longer generates it.", "prefix":"C" }, "backward_compatibility:at_initialization/1": { "body": ["at_initialization(${1:Goal})$2\n$0" ], "description":" at_initialization(:Goal) is det.\n\n Register goal only to be run if a saved state is restored.\n\n @deprecated Use initialization(Goal, restore)", "prefix":"at_initialization" }, "backward_compatibility:checklist/2": { "body": ["checklist(${1:Goal}, ${2:List})$3\n$0" ], "description":" checklist(:Goal, +List)\n\n @deprecated Use maplist/2", "prefix":"checklist" }, "backward_compatibility:concat/3": { "body": ["concat(${1:Atom1}, ${2:Atom2}, ${3:Atom})$4\n$0" ], "description":" concat(+Atom1, +Atom2, -Atom) is det.\n\n @deprecated Use ISO atom_concat/3", "prefix":"concat" }, "backward_compatibility:concat_atom/2": { "body": ["concat_atom(${1:List}, ${2:Atom})$3\n$0" ], "description":" concat_atom(+List, -Atom) is det.\n\n Concatenate a list of atomic values to an atom.\n\n @deprecated Use atomic_list_concat/2 as proposed by the prolog\n commons initiative.", "prefix":"concat_atom" }, "backward_compatibility:concat_atom/3": { "body": ["concat_atom(${1:List}, ${2:Seperator}, ${3:Atom})$4\n$0" ], "description":" concat_atom(+List, +Seperator, -Atom) is det.\n\n Concatenate a list of atomic values to an atom, inserting Seperator\n between each consecutive elements.\n\n @deprecated Use atomic_list_concat/3 as proposed by the prolog\n commons initiative.", "prefix":"concat_atom" }, "backward_compatibility:convert_time/2": { "body": ["convert_time(${1:Stamp}, ${2:String})$3\n$0" ], "description":" convert_time(+Stamp, -String)\n\n Convert a time-stamp as obtained though get_time/1 into a textual\n representation using the C-library function ctime(). The value is\n returned as a SWI-Prolog string object (see section 4.23). See\n also convert_time/8.\n\n @deprecated Use format_time/3.", "prefix":"convert_time" }, "backward_compatibility:convert_time/8": { "body": [ "convert_time(${1:Stamp}, ${2:Y}, ${3:Mon}, ${4:Day}, ${5:Hour}, ${6:Min}, ${7:Sec}, ${8:MilliSec})$9\n$0" ], "description":" convert_time(+Stamp, -Y, -Mon, -Day, -Hour, -Min, -Sec, -MilliSec)\n\n Convert a time stamp, provided by get_time/1, time_file/2,\n etc. Year is unified with the year, Month with the month number\n (January is 1), Day with the day of the month (starting with 1),\n Hour with the hour of the day (0--23), Minute with the minute\n (0--59). Second with the second (0--59) and MilliSecond with the\n milliseconds (0--999). Note that the latter might not be accurate\n or might always be 0, depending on the timing capabilities of the\n system. 
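A time stamp can also be decomposed with stamp_date_time/3, as in this illustrative query (the resulting values naturally depend on the clock and time zone):

?- get_time(Stamp),
   stamp_date_time(Stamp, date(Y,M,D,H,Mn,S,_Off,_TZ,_DST), local).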
See also convert_time/2.\n\n @deprecated Use stamp_date_time/3.", "prefix":"convert_time" }, "backward_compatibility:current_module/2": { "body": ["current_module(${1:Module}, ${2:File})$3\n$0" ], "description":" current_module(?Module, ?File) is nondet.\n\n True if Module is a module loaded from File.\n\n @deprecated Use module_property(Module, file(File))", "prefix":"current_module" }, "backward_compatibility:current_mutex/3": { "body": ["current_mutex(${1:Mutex}, ${2:Owner}, ${3:Count})$4\n$0" ], "description":" current_mutex(?Mutex, ?Owner, ?Count) is nondet.\n\n @deprecated Replaced by mutex_property/2", "prefix":"current_mutex" }, "backward_compatibility:current_thread/2": { "body": ["current_thread(${1:Thread}, ${2:Status})$3\n$0" ], "description":" current_thread(?Thread, ?Status) is nondet.\n\n @deprecated Replaced by thread_property/2", "prefix":"current_thread" }, "backward_compatibility:displayq/1": { "body": ["displayq(${1:Term})$2\n$0" ], "description":" displayq(@Term) is det.\n displayq(+Stream, @Term) is det.\n\n Write term ignoring operators and quote atoms.\n\n @deprecated Use write_term/3 or write_canonical/2.", "prefix":"displayq" }, "backward_compatibility:displayq/2": { "body": ["displayq(${1:Stream}, ${2:Term})$3\n$0" ], "description":" displayq(@Term) is det.\n displayq(+Stream, @Term) is det.\n\n Write term ignoring operators and quote atoms.\n\n @deprecated Use write_term/3 or write_canonical/2.", "prefix":"displayq" }, "backward_compatibility:eval_license/0": { "body": ["eval_license$1\n$0" ], "description":" eval_license is det.\n\n @deprecated Equivalent to license/0", "prefix":"eval_license" }, "backward_compatibility:export_list/2": { "body": ["export_list(${1:Module}, ${2:List})$3\n$0" ], "description":" export_list(+Module, -List) is det.\n\n Module exports the predicates of List.\n\n @deprecated Use module_property(Module, exports(List))", "prefix":"export_list" }, "backward_compatibility:feature/2": { "body": ["feature(${1:Key}, ${2:Value})$3\n$0" ], "description":" feature(?Key, ?Value) is nondet.\n set_feature(+Key, @Term) is det.\n\n Control Prolog flags.\n\n @deprecated Use ISO current_prolog_flag/2 and set_prolog_flag/2.", "prefix":"feature" }, "backward_compatibility:flush/0": { "body": ["flush$1\n$0" ], "description":" flush is det.\n\n @deprecated use ISO flush_output/0.", "prefix":"flush" }, "backward_compatibility:free_variables/2": { "body": ["free_variables(${1:Term}, ${2:Variables})$3\n$0" ], "description":" free_variables(+Term, -Variables)\n\n Return a list of unbound variables in Term. The name\n term_variables/2 is more widely used.\n\n @deprecated Use term_variables/2.", "prefix":"free_variables" }, "backward_compatibility:hash/1": { "body": ["hash(${1:PredInd})$2\n$0" ], "description":" hash(:PredInd) is det.\n\n Demands PredInd to be indexed using a hash-table. This is\n handled dynamically.", "prefix":"hash" }, "backward_compatibility:hash_term/2": { "body": ["hash_term(${1:Term}, ${2:Hash})$3\n$0" ], "description":" hash_term(+Term, -Hash) is det.\n\n If Term is ground, Hash is unified to an integer representing\n a hash for Term. 
Otherwise Hash is left unbound.\n\n @deprecated Use term_hash/2.", "prefix":"hash_term" }, "backward_compatibility:index/1": { "body": ["index(${1:Head})$2\n$0" ], "description":" index(:Head) is det.\n\n Prepare the predicate indicated by Head for multi-argument\n indexing.\n\n @deprecated As of version 5.11.29, SWI-Prolog performs\n just-in-time indexing on all arguments.", "prefix":"index" }, "backward_compatibility:lock_predicate/2": { "body": ["lock_predicate(${1:Name}, ${2:Arity})$3\n$0" ], "description":" lock_predicate(+Name, +Arity) is det.\n unlock_predicate(+Name, +Arity) is det.\n\n @deprecated see lock_predicate/1 and unlock_predicate/1.", "prefix":"lock_predicate" }, "backward_compatibility:merge/3": { "body": ["merge(${1:List1}, ${2:List2}, ${3:List3})$4\n$0" ], "description":" merge(+List1, +List2, -List3)\n\n Merge the ordered sets List1 and List2 into a new ordered list.\n Duplicates are not removed and their order is maintained.\n\n @deprecated The name of this predicate is far too general for\n a rather specific function.", "prefix":"merge" }, "backward_compatibility:merge_set/3": { "body": ["merge_set(${1:Set1}, ${2:Set2}, ${3:Set3})$4\n$0" ], "description":" merge_set(+Set1, +Set2, -Set3)\n\n Merge the ordered sets Set1 and Set2 into a new ordered set\n without duplicates.\n\n @deprecated New code should use ord_union/3 from\n library(ordsets)", "prefix":"merge_set" }, "backward_compatibility:message_queue_size/2": { "body": ["message_queue_size(${1:Queue}, ${2:Size})$3\n$0" ], "description":" message_queue_size(+Queue, -Size) is det.\n\n True if Queue holds Size terms.\n\n @deprecated Please use message_queue_property(Queue, Size)", "prefix":"message_queue_size" }, "backward_compatibility:proper_list/1": { "body": ["proper_list(${1:List})$2\n$0" ], "description":" proper_list(+List)\n\n Old SWI-Prolog predicate to check for a list that really ends\n in a []. 
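For comparison, the ISO replacement behaves as in these example queries:

?- is_list([a,b,c]).
true.

?- is_list([a,b|_Tail]).
false.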
There is not much use for the quick is_list, as in\n most cases you want to process the list element-by-element anyway.\n\n @deprecated Use ISO is_list/1.", "prefix":"proper_list" }, "backward_compatibility:read_clause/1": { "body": ["read_clause(${1:Term})$2\n$0" ], "description":" read_clause(-Term) is det.\n\n @deprecated Use read_clause/3 or read_term/3.", "prefix":"read_clause" }, "backward_compatibility:read_clause/2": { "body": ["read_clause(${1:Stream}, ${2:Term})$3\n$0" ], "description":" read_clause(+Stream, -Term) is det.\n\n @deprecated Use read_clause/3 or read_term/3.", "prefix":"read_clause" }, "backward_compatibility:read_pending_input/3": { "body": ["read_pending_input(${1:Stream}, ${2:Codes}, ${3:Tail})$4\n$0" ], "description":" read_pending_input(+Stream, -Codes, ?Tail) is det.\n\n @deprecated Use read_pending_codes/3.", "prefix":"read_pending_input" }, "backward_compatibility:read_variables/2": { "body": ["read_variables(${1:Term}, ${2:Bindings})$3\n$0" ], "description":" read_variables(-Term, -Bindings) is det.\n read_variables(+In:stream, -Term, -Bindings) is det.\n\n @deprecated Use ISO read_term/2 or read_term/3.", "prefix":"read_variables" }, "backward_compatibility:read_variables/3": { "body": ["read_variables(${1:In}, ${2:Term}, ${3:Bindings})$4\n$0" ], "description":" read_variables(-Term, -Bindings) is det.\n read_variables(+In:stream, -Term, -Bindings) is det.\n\n @deprecated Use ISO read_term/2 or read_term/3.", "prefix":"read_variables" }, "backward_compatibility:set_base_module/1": { "body": ["set_base_module(${1:Base})$2\n$0" ], "description":" set_base_module(:Base) is det.\n\n Set the default module from whic we inherit.\n\n @deprecated Equivalent to set_module(base(Base)).", "prefix":"set_base_module" }, "backward_compatibility:set_feature/2": { "body": ["set_feature(${1:Key}, ${2:Value})$3\n$0" ], "description":" feature(?Key, ?Value) is nondet.\n set_feature(+Key, @Term) is det.\n\n Control Prolog flags.\n\n @deprecated Use ISO current_prolog_flag/2 and set_prolog_flag/2.", "prefix":"set_feature" }, "backward_compatibility:setup_and_call_cleanup/3": { "body": ["setup_and_call_cleanup(${1:Setup}, ${2:Goal}, ${3:Cleanup})$4\n$0" ], "description":" setup_and_call_cleanup(:Setup, :Goal, :Cleanup).\n\n Call Cleanup once after Goal is finished.\n\n @deprecated Use setup_call_cleanup/3.", "prefix":"setup_and_call_cleanup" }, "backward_compatibility:setup_and_call_cleanup/4": { "body": [ "setup_and_call_cleanup(${1:Setup}, ${2:Goal}, ${3:Catcher}, ${4:Cleanup})$5\n$0" ], "description":" setup_and_call_cleanup(:Setup, :Goal, Catcher, :Cleanup).\n\n Call Cleanup once after Goal is finished, with Catcher\n unified to the reason\n\n @deprecated Use setup_call_cleanup/3.", "prefix":"setup_and_call_cleanup" }, "backward_compatibility:sformat/2": { "body": ["sformat(${1:String}, ${2:Format})$3\n$0" ], "description":" sformat(-String, +Format, +Args) is det.\n sformat(-String, +Format) is det.\n\n @deprecated Use format/3 as =|format(string(String), ...)|=", "prefix":"sformat" }, "backward_compatibility:sformat/3": { "body": ["sformat(${1:String}, ${2:Format}, ${3:Args})$4\n$0" ], "description":" sformat(-String, +Format, +Args) is det.\n sformat(-String, +Format) is det.\n\n @deprecated Use format/3 as =|format(string(String), ...)|=", "prefix":"sformat" }, "backward_compatibility:string_to_atom/2": { "body": ["string_to_atom(${1:String}, ${2:Atom})$3\n$0" ], "description":" string_to_atom(?String, ?Atom) is det.\n\n Bi-directional conversion between string and 
atom.\n\n @deprecated Use atom_string/2. Note that the order of the\n arguments is reversed.", "prefix":"string_to_atom" }, "backward_compatibility:string_to_list/2": { "body": ["string_to_list(${1:String}, ${2:Codes})$3\n$0" ], "description":" string_to_list(?String, ?Codes) is det.\n\n Bi-directional conversion between a string and a list of\n character codes.\n\n @deprecated Use string_codes/2.", "prefix":"string_to_list" }, "backward_compatibility:sublist/3": { "body": ["sublist(${1:Goal}, ${2:List1}, ${3:List2})$4\n$0" ], "description":" sublist(:Goal, +List1, ?List2)\n\n Succeeds if List2 unifies with a list holding those terms for wich\n call(Goal, Elem) succeeds.\n\n @deprecated Use include/3 from library(apply)\n @compat DEC10 library", "prefix":"sublist" }, "backward_compatibility:substring/4": { "body": ["substring(${1:String}, ${2:Offset}, ${3:Length}, ${4:Sub})$5\n$0" ], "description":" substring(+String, +Offset, +Length, -Sub)\n\n Predecessor of sub_string using 1-based Offset.\n\n @deprecated Use sub_string/5.", "prefix":"substring" }, "backward_compatibility:subsumes/2": { "body": ["subsumes(${1:Generic}, ${2:Specific})$3\n$0" ], "description":" subsumes(+Generic, @Specific)\n\n True if Generic is unified to Specific without changing\n Specific.\n\n @deprecated It turns out that calls to this predicate almost\n always should have used subsumes_term/2. Also the name is\n misleading. In case this is really needed, one is adviced to\n follow subsumes_term/2 with an explicit unification.", "prefix":"subsumes" }, "backward_compatibility:subsumes_chk/2": { "body": ["subsumes_chk(${1:Generic}, ${2:Specific})$3\n$0" ], "description":" subsumes_chk(@Generic, @Specific)\n\n True if Generic can be made equivalent to Specific without\n changing Specific.\n\n @deprecated Replace by subsumes_term/2.", "prefix":"subsumes_chk" }, "backward_compatibility:sumlist/2": { "body": ["sumlist(${1:List}, ${2:Sum})$3\n$0" ], "description":" sumlist(+List, -Sum) is det.\n\n True when Sum is the list of all numbers in List.\n\n @deprecated Use sum_list/2", "prefix":"sumlist" }, "backward_compatibility:unlock_predicate/2": { "body": ["unlock_predicate(${1:Name}, ${2:Arity})$3\n$0" ], "description":" lock_predicate(+Name, +Arity) is det.\n unlock_predicate(+Name, +Arity) is det.\n\n @deprecated see lock_predicate/1 and unlock_predicate/1.", "prefix":"unlock_predicate" }, "backward_compatibility:write_ln/1": { "body": ["write_ln(${1:X})$2\n$0" ], "description":" write_ln(X) is det\n\n @deprecated Use writeln(X).", "prefix":"write_ln" }, "bagof/3": { "body":"bagof(${1:Template}, ${2:Goal}, ${3:Bag})$4\n$0", "description":"[ISO]bagof(+Template, :Goal, -Bag).\nUnify Bag with the alternatives of Template. If Goal has free variables besides the one sharing with Template, bagof/3 will backtrack over the alternatives of these free variables, unifying Bag with the corresponding alternatives of Template. The construct +Var^Goal tells bagof/3 not to bind Var in Goal. bagof/3 fails if Goal has no solutions. The example below illustrates bagof/3 and the ^ operator. The variable bindings are printed together on one line to save paper. 
\n\n\n\n2 ?- listing(foo).\nfoo(a, b, c).\nfoo(a, b, d).\nfoo(b, c, e).\nfoo(b, c, f).\nfoo(c, c, g).\ntrue.\n\n3 ?- bagof(C, foo(A, B, C), Cs).\nA = a, B = b, C = G308, Cs = [c, d] ;\nA = b, B = c, C = G308, Cs = [e, f] ;\nA = c, B = c, C = G308, Cs = [g].\n\n4 ?- bagof(C, A^foo(A, B, C), Cs).\nA = G324, B = b, C = G326, Cs = [c, d] ;\nA = G324, B = c, C = G326, Cs = [e, f, g].\n\n5 ?-\n\n ", "prefix":"bagof" }, "base32:base32/2": { "body": ["base32(${1:Plain}, ${2:Encoded})$3\n$0" ], "description":" base32(+Plain, -Encoded) is det.\n base32(-Plain, +Encoded) is det.\n\n Translates between plaintext and base32 encoded atom or string.\n See also base32//1.", "prefix":"base32" }, "base32:base32/3": { "body": ["base32(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"base32('Param1','Param2','Param3')", "prefix":"base32" }, "base64:base64/2": { "body": ["base64(${1:Plain}, ${2:Encoded})$3\n$0" ], "description":" base64(+Plain, -Encoded) is det.\n base64(-Plain, +Encoded) is det.\n\n Translates between plaintext and base64 encoded atom or string.\n See also base64//1.", "prefix":"base64" }, "base64:base64/3": { "body": ["base64(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"base64('Param1','Param2','Param3')", "prefix":"base64" }, "base64:base64url/2": { "body": ["base64url(${1:Plain}, ${2:Encoded})$3\n$0" ], "description":" base64url(+Plain, -Encoded) is det.\n base64url(-Plain, +Encoded) is det.\n\n Translates between plaintext and base64url encoded atom or\n string. Base64URL encoded values can safely be used as URLs and\n file names. The use \"-\" instead of \"+\", \"_\" instead of \"/\" and\n do not use padding. This implies that the encoded value cannot\n be embedded inside a longer string.", "prefix":"base64url" }, "base64:base64url/3": { "body": ["base64url(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"base64url('Param1','Param2','Param3')", "prefix":"base64url" }, "bdb:bdb_close/1": { "body":"bdb_close(${1:DB})$2\n$0", "description":"[det]bdb_close(+DB).\nClose BerkeleyDB database indicated by DB. DB becomes invalid after this operation. An attempt to access a closed database is detected reliably and results in a permission_error exception.", "prefix":"bdb_close" }, "bdb:bdb_close_environment/1": { "body":"bdb_close_environment(${1:Environment})$2\n$0", "description":"[det]bdb_close_environment(+Environment).\nClose a database environment that was explicitly created using bdb_init/2.", "prefix":"bdb_close_environment" }, "bdb:bdb_closeall/0": { "body": ["bdb_closeall$1\n$0" ], "description":" bdb_closeall is det.\n\n Close all currently open databases and environments. This is\n called automatically after loading this library on process\n terminatation using at_halt/1.", "prefix":"bdb_closeall" }, "bdb:bdb_current/1": { "body":"bdb_current(${1:DB})$2\n$0", "description":"[nondet]bdb_current(?DB).\nTrue when DB is a handle to a currently open database.", "prefix":"bdb_current" }, "bdb:bdb_current_environment/1": { "body":"bdb_current_environment(${1:Environment})$2\n$0", "description":"[nondet]bdb_current_environment(-Environment).\nTrue when Environment is a currently known environment.", "prefix":"bdb_current_environment" }, "bdb:bdb_del/3": { "body":"bdb_del(${1:DB}, ${2:Key}, ${3:Value})$4\n$0", "description":"[nondet]bdb_del(+DB, ?Key, ?Value).\nDelete the first matching key-value pair from the database. If the database allows for duplicates, this predicate is non-deterministic, otherwise it is semidet. 
The enumeration performed by this predicate is the same as for bdb_get/3. See also bdb_delall/3.", "prefix":"bdb_del" }, "bdb:bdb_delall/3": { "body":"bdb_delall(${1:DB}, ${2:Key}, ${3:Value})$4\n$0", "description":"[det]bdb_delall(+DB, +Key, ?Value).\nDelete all matching key-value pairs from the database. With unbound Value the key and all values are removed efficiently.", "prefix":"bdb_delall" }, "bdb:bdb_enum/3": { "body":"bdb_enum(${1:DB}, ${2:Key}, ${3:Value})$4\n$0", "description":"bdb_enum(+DB, -Key, -Value).\nEnumerate the whole database, unifying the key-value pairs to Key and Value. Though this predicate can be used with an instantiated Key to enumerate only the keys unifying with Key, no indexing is used by bdb_enum/3.", "prefix":"bdb_enum" }, "bdb:bdb_environment_property/2": { "body":"bdb_environment_property(${1:Environment}, ${2:Property})$3\n$0", "description":"[nondet]bdb_environment_property(?Environment, ?Property).\nTrue when Property is a property of Environment. Defined properties are all boolean options defined with bdb_init/2 and the following options: home(-Path): Path is the absolute path name for the directory used as database environment.\n\nopen(-Boolean): True if the environment is open.\n\n ", "prefix":"bdb_environment_property" }, "bdb:bdb_get/3": { "body":"bdb_get(${1:DB}, ${2:Key}, ${3:Value})$4\n$0", "description":"[nondet]bdb_get(+DB, ?Key, -Value).\nQuery the database. If the database allows for duplicates this predicate is non-deterministic, otherwise it is semidet. Note that if Key is a term this matches stored keys that are variants of Key, not unification. See =@=/2. Thus, after bdb_put(DB, f(X), 42), we get the following query results: \n\nbdb_get(DB, f(Y), V) binds Value to 42, while Y is left unbound.\nbdb_get(DB, f(a), V) fails.\nbdb_enum(DB, f(a), V) succeeds, but does not perform any indexing, i.e., it enumerates all key-value pairs and performs the unification.\n\n", "prefix":"bdb_get" }, "bdb:bdb_getall/3": { "body":"bdb_getall(${1:DB}, ${2:Key}, ${3:Values})$4\n$0", "description":"[semidet]bdb_getall(+DB, +Key, -Values).\nGet all values associated with Key. Fails if the key does not exist (as bagof/3).", "prefix":"bdb_getall" }, "bdb:bdb_init/1": { "body":"bdb_init(${1:Options})$2\n$0", "description":"[det]bdb_init(+Options).\n", "prefix":"bdb_init" }, "bdb:bdb_init/2": { "body":"bdb_init(${1:Environment}, ${2:Options})$3\n$0", "description":"[det]bdb_init(-Environment, +Options).\nInitialise a DB environment. The predicate bdb_init/1 initialises the default environment, while bdb_init/2 creates an explicit environment that can be passed to bdb_open/4 using the environment(+Environment) option. If bdb_init/1 is called, it must be called before the first call to bdb_open/4 that uses the default environment. If bdb_init/1 is not called, the default environment can only handle plain files and does not support multiple threads, locking, crash recovery, etc. Initializing a BDB environment always requires the home(+Dir) option. If the environment contains no databases, the argument create(true) must be supplied as well. \n\nThe currently supported options are listed below. The name of the boolean options are derived from the DB flags by dropping the =DB_= prefix and using lowercase, e.g. DB_INIT_LOCK becomes init_lock. For details, please refer to the DB manual. \n\ncreate(+Bool): If true, create any underlying file as required. By default, no new files are created. 
This option should be set for prograns that create new databases.\n\nfailchk(+Bool): home(+Home): Specify the DB home directory, the directory holding the database files. The directory must exist prior to calling these predicates.\n\ninit_lock(+Bool): Enable locking (DB_INIT_LOCK). Implied if transactions are used.\n\ninit_log(+Bool): Enable logging the DB modifications (DB_INIT_LOG). Logging enables recovery of databases in case of system failure. Normally it is used in combination with transactions.\n\ninit_mpool(+Bool): Initialize memory pool. Impicit if mp_size(+Size) or mp_mmapsize(+Size) is specified.\n\ninit_rep(+Bool): Init database replication. The rest of the replication logic is not yet supported.\n\ninit_txn(+Bool): Init transactions. Implies init_log(true).\n\nlockdown(+Bool): mp_size(+Integer): mp_mmapsize(+Integer): Control memory pool handling (DB_INIT_MPOOL). The mp_size option sets the memory-pool used for caching, while the mp_mmapsize controls the maximum size of a DB file mapped entirely into memory.\n\nprivate(+Bool): recover(+Bool): Perform recovery before opening the database.\n\nrecover_fatal(+Bool): Perform fatal recovery before opening the database.\n\nregister(+Bool): server(+Host, [+ServerOptions]): Initialise the DB package for accessing a remote database. Host specifies the name of the machine running berkeley_db_svc. Optionally additional options may be specified: server_timeout(+Seconds)Specify the timeout time the server uses to determine that the client has gone. This implies the server will terminate the connection to this client if this client does not issue any requests for the indicated time.client_timeout(+Seconds)Specify the time the client waits for the server to handle a request. \n\nsystem_mem(+Bool): transactions(+Bool): Enable transactions, providing atomicy of changes and security. Implies logging and locking. See bdb_transaction/1.\n\nthread(+Bool): Make the environment accessible from multiple threads.\n\nthread_count(+Integer): Declare an approximate number of threads in the database environment. See DB_ENV->set_thread_count().\n\nuse_environ(+Bool): use_environ_root(+Bool): config(+ListOfConfig): Specify a list of configuration options, each option is of the form Name(Value). Currently unused.\n\n ", "prefix":"bdb_init" }, "bdb:bdb_open/4": { "body":"bdb_open(${1:File}, ${2:Mode}, ${3:DB}, ${4:Options})$5\n$0", "description":"[det]bdb_open(+File, +Mode, -DB, +Options).\nOpen File holding a database. Mode is one of read, providing read-only access or update, providing read/write access. Options is a list of options. Supported options are below. The boolean options are passed as flags to DB->open(). The option name is derived from the flag name by stripping the DB_ prefix and converting to lower case. Consult the Berkeley DB documentation for details. auto_commit(+Boolean): Open the database in a transaction. Ensures no database is created in case of failure.\n\ncreate(+Boolean): Create a new database of the database does not exist.\n\ndup(+Boolean): Do/do not allow for duplicate values on the same key. Default is not to allow for duplicates.\n\nexcl(+Boolean): Combined with create(true), fail if the database already exists.\n\nmultiversion(+Boolean): Open the database with support for multiversion concurrency control. 
The flag is passed, but no further support is provided yet.\n\nnommap(+Boolean): Do not map this database into process memory.\n\nrdonly(+Boolean): Open the database for reading only.\n\nread_uncommitted(+Boolean): Read operations on the database may request the return of modified but not yet committed data. This flag must be specified on all DB handles used to perform dirty reads or database updates, otherwise requests for dirty reads may not be honored and the read may block.\n\nthread(+Boolean): Enable access to the database handle from multiple threads. This is default if the corresponding flag is specified for the environment.\n\ntruncate(+Boolean): When specified, truncate the underlying file, i.e., start with an empty database.\n\ndatabase(+Name): If File contains multiple databases, address the named database in the file. A DB file can only consist of multiple databases if the bdb_open/4 call that created it specified this argument. Each database in the file has its own characteristics.\n\nenvironment(+Environment): Specify a database environment created using bdb_init/2.\n\nkey(+Type): value(+Type): Specify the type of the key or value. Allowed values are: termKey/Value is a Prolog term (default). This type allows for representing arbitrary Prolog data in both keys and value. The representation is space-efficient, but Prolog specific. See PL_record_external() in the SWI-Prolog Reference Manual for details on the representation. The other representations are more neutral. This implies they are more stable and sharing the DB with other languages is feasible.atomKey/Value is an atom. The text is represented as a UTF-8 string and its length.c_blobKey/Value is a blob (sequence of bytes). On output, a Prolog string is used. The input is either a Prolog string or an atom holding only characters in the range [0..255].c_stringKey/Value is an atom. The text is represented as a C 0-terminated UTF-8 string.c_longKey/Value is an integer. The value is represented as a native C long in machine byte-order. \n\n DB is unified with a blob of type db. Database handles are subject to atom garbage collection. Errors: permission_error(access, bdb_environment, Env) if an environment is not thread-enabled and accessed from multiple threads.\n\n ", "prefix":"bdb_open" }, "bdb:bdb_put/3": { "body":"bdb_put(${1:DB}, ${2:Key}, ${3:Value})$4\n$0", "description":"[det]bdb_put(+DB, +Key, +Value).\nAdd a new key-value pair to the database. If the database does not allow for duplicates the possible previous associated with Key is replaced by Value.", "prefix":"bdb_put" }, "bdb:bdb_transaction/1": { "body":"bdb_transaction(${1:Goal})$2\n$0", "description":"[semidet]bdb_transaction(:Goal).\n", "prefix":"bdb_transaction" }, "bdb:bdb_transaction/2": { "body":"bdb_transaction(${1:Environment}, ${2:Goal})$3\n$0", "description":"[semidet]bdb_transaction(+Environment, :Goal).\nStart a transaction, execute Goal and terminate the transaction. Only if Goal succeeds, the transaction is commited. If Goal fails or raises an exception, the transaction is aborted and bdb_transaction/1 either fails or rethrows the exception. Of special interest is the exception \n\nerror(package(db, deadlock), _)\n\n This exception indicates a deadlock was raised by one of the DB predicates. Deadlocks may arise if multiple processes or threads access the same keys in a different order. The DB infra-structure causes one of the processes involved in the deadlock to abort its transaction. This process may choose to restart the transaction. 
\n\nFor example, a DB application may define {Goal} to realise transactions and restart these automatically is a deadlock is raised: \n\n\n\n{Goal} :-\n catch(bdb_transaction(Goal), E, true),\n ( var(E)\n -> true\n ; E = error(package(db, deadlock), _)\n -> {Goal}\n ; throw(E)\n ).\n\n Environment defines the environment to which the transaction applies. If omitted, the default environment is used. See bdb_init/1 and bdb_init/2. ", "prefix":"bdb_transaction" }, "bdb:bdb_version/1": { "body":"bdb_version(${1:Version})$2\n$0", "description":"[det]bdb_version(-Version:integer).\nTrue when Version identifies the database version. Version is an integer defined as: \n\nDB_VERSION_MAJOR*10000 +\nDB_VERSION_MINOR*100 +\nDB_VERSION_PATCH\n\n \n\n", "prefix":"bdb_version" }, "begin_tests/1": { "body":"begin_tests(${1:Name})$2\n$0", "description":"begin_tests(+Name).\nStart named test-unit. Same as begin_tests(Name, []).", "prefix":"begin_tests" }, "begin_tests/2": { "body":"begin_tests(${1:Name}, ${2:Options})$3\n$0", "description":"begin_tests(+Name, +Options).\nStart named test-unit with options. Options provide conditional processing, setup and cleanup similar to individual tests (second argument of test/2 rules). Defined options are: \n\nblocked(+Reason): Test-unit has been blocked for the given Reason.\n\ncondition(:Goal): Executed before executing any of the tests. If Goal fails, the test of this unit is skipped.\n\nsetup(:Goal): Executed before executing any of the tests.\n\ncleanup(:Goal): Executed after completion of all tests in the unit.\n\nsto(+Terms): Specify default for subject-to-occurs-check mode. See section 2 for details on the sto option.\n\n ", "prefix":"begin_tests" }, "between/3": { "body":"between(${1:Low}, ${2:High}, ${3:Value})$4\n$0", "description":"between(+Low, +High, ?Value).\nLow and High are integers, High >=Low. If Value is an integer, Low ==Low, a feature that is particularly interesting for generating integers from a certain value.", "prefix":"between" }, "blob/2": { "body":"blob(${1:Term}, ${2:Type})$3\n$0", "description":"blob(@Term, ?Type).\nTrue if Term is a blob of type Type. 
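For instance, stream handles are blobs, so a query along these lines is expected to succeed:

?- current_output(_S), blob(_S, Type).
Type = stream.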
See section 11.4.7.", "prefix":"blob" }, "bounds:all_different/1": { "body": ["all_different(${1:'Param1'})$2\n$0" ], "description":"all_different('Param1')", "prefix":"all_different" }, "bounds:check/1": { "body": ["check(${1:'Param1'})$2\n$0" ], "description":"check('Param1')", "prefix":"check" }, "bounds:in/2": { "body": ["in(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"in('Param1','Param2')", "prefix":"in" }, "bounds:indomain/1": { "body": ["indomain(${1:'Param1'})$2\n$0" ], "description":"indomain('Param1')", "prefix":"indomain" }, "bounds:label/1": { "body": ["label(${1:'Param1'})$2\n$0" ], "description":"label('Param1')", "prefix":"label" }, "bounds:labeling/2": { "body": ["labeling(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"labeling('Param1','Param2')", "prefix":"labeling" }, "bounds:lex_chain/1": { "body": ["lex_chain(${1:'Param1'})$2\n$0" ], "description":"lex_chain('Param1')", "prefix":"lex_chain" }, "bounds:serialized/2": { "body": ["serialized(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"serialized('Param1','Param2')", "prefix":"serialized" }, "bounds:sum/3": { "body": ["sum(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"sum('Param1','Param2','Param3')", "prefix":"sum" }, "bounds:tuples_in/2": { "body": ["tuples_in(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"tuples_in('Param1','Param2')", "prefix":"tuples_in" }, "break/0": { "body":"break$1\n$0", "description":"break.\nRecursively start a new Prolog top level. This Prolog top level shares everything from the environment it was started in. Debugging is switched off on entering a break and restored on leaving one. The break environment is terminated by typing the system's end-of-file character (control-D). If that is somehow not functional, the term end_of_file. can be entered to return from the break environment. If the -t toplevel command line option is given, this goal is started instead of entering the default interactive top level (prolog/0). Notably the gui based versions (swipl-win on Windows and MacOS) provide the menu Run/New thread that opens a new toplevel that runs concurrently with the initial toplevel. The concurrent toplevel can be used to examine the program, in particular global dynamic predicates. It can not access global variables or thread-local dynamic predicates (see thread_local/1) of the main thread.\n\n", "prefix":"break" }, "broadcast:broadcast/1": { "body":"broadcast(${1:Term})$2\n$0", "description":"broadcast(+Term).\nBroadcast Term. There are no limitations to Term, though being a global service, it is good practice to use a descriptive and unique principal functor. All associated goals are started and regardless of their success or failure, broadcast/1 always succeeds. Exceptions are passed.", "prefix":"broadcast" }, "broadcast:broadcast_request/1": { "body":"broadcast_request(${1:Term})$2\n$0", "description":"broadcast_request(+Term).\nUnlike broadcast/1, this predicate stops if an associated goal succeeds. Backtracking causes it to try other listeners. A broadcast request is used to fetch information without knowing the identity of the agent providing it. C.f. 
``Is there someone who knows the age of John?'' could be asked using \n\n ...,\n broadcast_request(age_of('John', Age)),\n\n If there is an agent (listener) that registered an `age-of' service and knows about the age of `John' this question will be answered.\n\n", "prefix":"broadcast_request" }, "broadcast:listen/2": { "body":"listen(${1:Template}, ${2:Goal})$3\n$0", "description":"listen(+Template, :Goal).\nRegister a listen channel. Whenever a term unifying Template is broadcasted, call Goal. The following example traps all broadcasted messages as a variable unifies to any message. It is commonly used to debug usage of the library. \n\n?- listen(Term, (writeln(Term),fail)).\n?- broadcast(hello(world)).\nhello(world)\ntrue.\n\n ", "prefix":"listen" }, "broadcast:listen/3": { "body":"listen(${1:Listener}, ${2:Template}, ${3:Goal})$4\n$0", "description":"listen(+Listener, +Template, :Goal).\nDeclare Listener as the owner of the channel. Unlike a channel opened using listen/2, channels that have an owner can terminate the channel. This is commonly used if an object is listening to broadcast messages. In the example below we define a `name-item' displaying the name of an identifier represented by the predicate name_of/2. \n\n:- pce_begin_class(name_item, text_item).\n\nvariable(id, any, get, \"Id visualised\").\n\ninitialise(NI, Id:any) :->\n name_of(Id, Name),\n send_super(NI, initialise, name, Name,\n message(NI, set_name, @arg1)),\n send(NI, slot, id, Id),\n listen(NI, name_of(Id, Name),\n send(NI, selection, Name)).\n\nunlink(NI) :->\n unlisten(NI),\n send_super(NI, unlink).\n\nset_name(NI, Name:name) :->\n get(NI, id, Id),\n retractall(name_of(Id, _)),\n assert(name_of(Id, Name)),\n broadcast(name_of(Id, Name)).\n\n:- pce_end_class.\n\n ", "prefix":"listen" }, "broadcast:listening/3": { "body":"listening(${1:Listener}, ${2:Template}, ${3:Goal})$4\n$0", "description":"listening(?Listener, ?Template, ?Goal).\nExamine the current listeners. This predicate is useful for debugging purposes.", "prefix":"listening" }, "broadcast:unlisten/1": { "body":"unlisten(${1:Listener})$2\n$0", "description":"unlisten(+Listener).\nDeregister all entries created with listen/3 whose Listener unify.", "prefix":"unlisten" }, "broadcast:unlisten/2": { "body":"unlisten(${1:Listener}, ${2:Template})$3\n$0", "description":"unlisten(+Listener, +Template).\nDeregister all entries created with listen/3 whose Listener and Template unify.", "prefix":"unlisten" }, "broadcast:unlisten/3": { "body":"unlisten(${1:Listener}, ${2:Template}, ${3:Goal})$4\n$0", "description":"unlisten(+Listener, +Template, :Goal).\nDeregister all entries created with listen/3 whose Listener, Template and Goal unify.", "prefix":"unlisten" }, "byte_count/2": { "body":"byte_count(${1:Stream}, ${2:Count})$3\n$0", "description":"byte_count(+Stream, -Count).\nByte position in Stream. For binary streams this is the same as character_count/2. For text files the number may be different due to multi-byte encodings or additional record separators (such as Control-M in Windows).", "prefix":"byte_count" }, "c14n2:xml_write_canonical/3": { "body":"xml_write_canonical(${1:Stream}, ${2:DOM}, ${3:Options})$4\n$0", "description":"[det]xml_write_canonical(+Stream, +DOM, +Options).\nWrite an XML DOM using the canonical conventions as defined by C14n2. Namespace declarations in the canonical document depend on the original namespace declarations. 
For this reason the input document must be parsed (see load_structure/3) using the dialect xmlns and the option keep_prefix(true).", "prefix":"xml_write_canonical" }, "c14n2:xsd_time_string/3": { "body":"xsd_time_string(${1:DateTime}, ${2:Type}, ${3:String})$4\n$0", "description":"[det]xsd_time_string(?DateTime, ?Type, ?String).\nSerialize and deserialize the XSD date and time formats. The converion is represented by the table below. \n\nProlog term Type XSD string date(Y,M,D)xsd:dateYYYY-MM-DD date_time(Y,M,D,H,Mi,S)xsd:dateTimeYYYY-MM-DDTHH-MM-SS date_time(Y,M,D,H,Mi,S,0)xsd:dateTimeYYYY-MM-DDTHH-MM-SSZ date_time(Y,M,D,H,Mi,S,TZ)xsd:dateTimeYYYY-MM-DDTHH-MM-SS[+-]HH:MM time(H,M,S)xsd:timeHH:MM:SS year_month(Y,M)xsd:gYearMonthYYYY-MM month_day(M,D)xsd:gMonthDayMM-DD Dxsd:gDayDD Mxsd:gMonthMM Yxsd:gYearYYYY For the Prolog term all variables denote integers except for S, which represents seconds as either an integer or float. The TZ argument is the offset from UTC in seconds. The Type is written as xsd:name, but is in fact the full URI of the XSD data type, e.g., http://www.w3.org/2001/XMLSchema#date. In the XSD string notation, the letters YMDHS denote digits. The notation SS is either a two-digit integer or a decimal number with two digits before the floating point, e.g. 05.3 to denote 5.3 seconds. \n\nFor most conversions, Type may be specified unbound and is unified with the resulting type. For ambiguous conversions, Type must be specified or an instantiation_error is raised. When converting from Prolog to XSD serialization, D, M and Y are ambiguous. When convertion from XSD serialization to Prolog, only DD and MM are ambiguous. If Type and String are both given and String is a valid XSD date/time representation but not matching Type a syntax error with the shape syntax_error(Type) is raised. If DateTime and Type are both given and DateTime does not satisfy Type a domain_error of the shape domain_error(xsd_time(Type), DateTime) is raised. \n\nThe domain of numerical values is verified and a corresponding domain_error exception is raised if the domain is violated. There is no test for the existence of a date and thus \"2016-02-31\", although non-existing is accepted as valid.\n\n", "prefix":"xsd_time_string" }, "call/1": { "body":"call(${1:Goal})$2\n$0", "description":"[ISO]call(:Goal).\nInvoke Goal as a goal. Note that clauses may have variables as subclauses, which is identical to call/1.", "prefix":"call" }, "call/3": { "body":"call(${1:Goal}, ${2:ExtraArg1}, ${3:...})$4\n$0", "description":"[ISO]call(:Goal, +ExtraArg1, ...).\nAppend ExtraArg1, ExtraArg2, ... to the argument list of Goal and call the result. For example, call(plus(1), 2, X) will call plus(1, 2, X), binding X to 3. The call/[2..] construct is handled by the compiler. The predicates call/[2-8] are defined as real (meta-)predicates and are available to inspection through current_predicate/1, predicate_property/2, etc.59Arities 2..8 are demanded by ISO/IEC 13211-1:1995/Cor.2:2012. Higher arities are handled by the compiler and runtime system, but the predicates are not accessible for inspection.60Future versions of the reflective predicate may fake the presence of call/9.. . Full logical behaviour, generating all these pseudo predicates, is probably undesirable and will become impossible if max_arity is removed.\n\n", "prefix":"call" }, "call_cleanup/2": { "body":"call_cleanup(${1:Goal}, ${2:Cleanup})$3\n$0", "description":"call_cleanup(:Goal, :Cleanup).\nSame as setup_call_cleanup(true, Goal, Cleanup). 
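A minimal sketch of the safer pattern, assuming a hypothetical process_stream/1 and an already bound File:

setup_call_cleanup(
    open(File, read, In),       % setup: once it succeeds, close/1 is guaranteed to run
    process_stream(In),         % the actual goal
    close(In)).                 % cleanup: runs exactly once, however the goal ends

call_cleanup/2 corresponds to the special case with a trivial setup step.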
This is provided for compatibility with a number of other Prolog implementations only. Do not use call_cleanup/2 if you perform side-effects prior to calling that will be undone by Cleanup. Instead, use setup_call_cleanup/3 with an appropriate first argument to perform those side-effects.", "prefix":"call_cleanup" }, "call_cleanup/3": { "body":"call_cleanup(${1:Goal}, ${2:Catcher}, ${3:Cleanup})$4\n$0", "description":"call_cleanup(:Goal, +Catcher, :Cleanup).\nSame as setup_call_catcher_cleanup(true, Goal, Catcher, Cleanup). The same warning as for call_cleanup/2 applies.", "prefix":"call_cleanup" }, "call_dcg/3": { "body":"call_dcg(${1:DCGBody}, ${2:State0}, ${3:State})$4\n$0", "description":"call_dcg(:DCGBody, ?State0, ?State).\nAs phrase/3, but without type checking State0 and State. This allows for using DCG rules for threading an arbitrary state variable. This predicate was introduced after type checking was added to phrase/3.66After discussion with Samer Abdallah. A portable solution for threading state through a DCG can be implemented by wrapping the state in a list and use the DCG semicontext facility. Subsequently, the following predicates may be used to access and modify the state:67This solution was proposed by Markus Triska. \n\n\n\nstate(S), [S] --> [S].\nstate(S0, S), [S] --> [S0].\n\n \n\n", "prefix":"call_dcg" }, "call_residue_vars/2": { "body":"call_residue_vars(${1:Goal}, ${2:Vars})$3\n$0", "description":"call_residue_vars(:Goal, -Vars).\nFind residual attributed variables left by Goal. This predicate is intended for reasoning about and debugging programs that use coroutining or constraints. To see why this predicate is necessary, consider a predicate that poses contradicting constraints on a variable, and where that variable does not appear in any argument of the predicate and hence does not yield any residual goals on the toplevel when the predicate is invoked. Such programs should fail, but sometimes succeed because the constraint solver is too weak to detect the contradiction. Ideally, delayed goals and constraints are all executed at the end of the computation. The meta predicate call_residue_vars/2 finds variables that are given attributes or whose attributes are modified by Goal, regardless of whether or not these variables are reachable from the arguments of Goal.146The implementation of call_residue_vars/2 is completely redone in version 7.3.2 (7.2.1) after discussion with Bart Demoen. The current implementation no longer performs full scans of the stacks. The overhead is proportional to the number of attributed variables on the stack, dead or alive..", "prefix":"call_residue_vars" }, "call_with_depth_limit/3": { "body":"call_with_depth_limit(${1:Goal}, ${2:Limit}, ${3:Result})$4\n$0", "description":"call_with_depth_limit(:Goal, +Limit, -Result).\nIf Goal can be proven without recursion deeper than Limit levels, call_with_depth_limit/3 succeeds, binding Result to the deepest recursion level used during the proof. Otherwise, Result is unified with depth_limit_exceeded if the limit was exceeded during the proof, or the entire predicate fails if Goal fails without exceeding Limit. The depth limit is guarded by the internal machinery. This may differ from the depth computed based on a theoretical model. For example, true/0 is translated into an inline virtual machine instruction. Also, repeat/0 is not implemented as below, but as a non-deterministic foreign predicate. 
\n\n\n\nrepeat.\nrepeat :-\n repeat.\n\n As a result, call_with_depth_limit/3 may still loop infinitely on programs that should theoretically finish in finite time. This problem can be cured by using Prolog equivalents to such built-in predicates. \n\nThis predicate may be used for theorem provers to realise techniques like iterative deepening. See also call_with_inference_limit/3. It was implemented after discussion with Steve Moyle smoyle@ermine.ox.ac.uk.\n\n", "prefix":"call_with_depth_limit" }, "call_with_inference_limit/3": { "body":"call_with_inference_limit(${1:Goal}, ${2:Limit}, ${3:Result})$4\n$0", "description":"call_with_inference_limit(:Goal, +Limit, -Result).\nEquivalent to call(Goal), but limits the number of inferences for each solution of Goal.61This predicate was realised after discussion with Ulrich Neumerkel and Markus Triska.. Execution may terminate as follows: \n\nIf Goal does not terminate before the inference limit is exceeded, Goal is aborted by injecting the exception inference_limit_exceeded into its execution. After termination of Goal, Result is unified with the atom inference_limit_exceeded. Otherwise,\nIf Goal fails, call_with_inference_limit/3 fails.\nIf Goal succeeds without a choice point, Result is unified with !.\nIf Goal succeeds with a choice point, Result is unified with true.\nIf Goal throws an exception, call_with_inference_limit/3 re-throws the exception.\n\n An inference is defined as a call or redo on a predicate. Please note that some primitive built-in predicates are compiled to virtual machine instructions for which inferences are not counted. The execution of predicates defined in other languages (e.g., C, C++) count as a single inference. This includes potentially expensive built-in predicates such as sort/2. \n\nCalls to this predicate may be nested. An inner call that sets the limit below the current is honoured. An inner call that would terminate after the current limit does not change the effective limit. See also call_with_depth_limit/3 and call_with_time_limit/2.\n\n", "prefix":"call_with_inference_limit" }, "callable/1": { "body":"callable(${1:Term})$2\n$0", "description":"[ISO]callable(@Term).\nTrue if Term is bound to an atom or a compound term. This was intended as a type-test for arguments to call/1 and call/2.. Note that callable only tests the surface term. Terms such as (22,true) are considered callable, but cause call/1 to raise a type error. Module-qualification of meta-argument (see meta_predicate/1) using :/2 causes callable to succeed on any meta-argument.52We think that callable/1 should be deprecated and there should be two new predicates, one performing a test for callable that is minimally module aware and possibly consistent with type-checking in call/1 and a second predicate that tests for atom or compound. Consider the program and query below: \n\n:- meta_predicate p(0).\n\np(G) :- callable(G), call(G).\n\n?- p(22).\nERROR: Type error: `callable' expected, found `22'\nERROR: In:\nERROR: [6] p(user:22)\n\n ", "prefix":"callable" }, "cancel_halt/1": { "body":"cancel_halt(${1:Reason})$2\n$0", "description":"cancel_halt(+Reason).\nIf this predicate is called from a hook registered with at_halt/1, halting Prolog is cancelled and an informational message is printed that includes Reason. 
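A minimal sketch of such a hook, with a hypothetical unsaved_buffers/0 test:

:- at_halt(check_unsaved).

check_unsaved :-
    (   unsaved_buffers                          % hypothetical: true if edits would be lost
    ->  cancel_halt('unsaved editor buffers')
    ;   true
    ).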
This is used by the development tools to cancel halting the system if the editor has unsaved data and the user decides to cancel.", "prefix":"cancel_halt" }, "catch/3": { "body":"catch(${1:Goal}, ${2:Catcher}, ${3:Recover})$4\n$0", "description":"[ISO]catch(:Goal, +Catcher, :Recover).\nBehaves as call/1 if no exception is raised when executing Goal. If an exception is raised using throw/1 while Goal executes, and the Goal is the innermost goal for which Catcher unifies with the argument of throw/1, all choice points generated by Goal are cut, the system backtracks to the start of catch/3 while preserving the thrown exception term, and Recover is called as in call/1. The overhead of calling a goal through catch/3 is comparable to call/1. Recovery from an exception is much slower, especially if the exception term is large due to the copying thereof.\n\n", "prefix":"catch" }, "ceil/1": { "body":"ceil(${1:Expr})$2\n$0", "description":"ceil(+Expr).\nSame as ceiling/1 (backward compatibility).", "prefix":"ceil" }, "ceiling/1": { "body":"ceiling(${1:Expr})$2\n$0", "description":"[ISO]ceiling(+Expr).\nEvaluate Expr and return the smallest integer larger than or equal to the result of the evaluation.", "prefix":"ceiling" }, "cgi:cgi_get_form/1": { "body": ["cgi_get_form(${1:Form})$2\n$0" ], "description":" cgi_get_form(-Form)\n\n Decodes standard input and the environment variables to obtain a\n list of arguments passed to the CGI script. This predicate both\n deals with the CGI *GET* method as well as the *POST* method. If\n the data cannot be obtained, an existence_error exception is\n raised.\n\n @param Form is a list of Name(Value) terms.", "prefix":"cgi_get_form" }, "char_code/2": { "body":"char_code(${1:Atom}, ${2:Code})$3\n$0", "description":"[ISO]char_code(?Atom, ?Code).\nConvert between character and character code for a single character. This is also called atom_char/2 in older versions of SWI-Prolog as well as some other Prolog implementations. The atom_char/2 predicate is available from the library backcomp.pl", "prefix":"char_code" }, "char_conversion/2": { "body":"char_conversion(${1:CharIn}, ${2:CharOut})$3\n$0", "description":"[ISO]char_conversion(+CharIn, +CharOut).\nDefine that term input (see read_term/3) maps each character read as CharIn to the character CharOut. Character conversion is only executed if the Prolog flag char_conversion is set to true and not inside quoted atoms or strings. The initial table maps each character onto itself. See also current_char_conversion/2.", "prefix":"char_conversion" }, "char_type/2": { "body":"char_type(${1:Char}, ${2:Type})$3\n$0", "description":"char_type(?Char, ?Type).\nTests or generates alternative Types or Chars. The character types are inspired by the standard C primitives. alnum: Char is a letter (upper- or lowercase) or digit.\n\nalpha: Char is a letter (upper- or lowercase).\n\ncsym: Char is a letter (upper- or lowercase), digit or the underscore (_). These are valid C and Prolog symbol characters.\n\ncsymf: Char is a letter (upper- or lowercase) or the underscore (_). These are valid first characters for C and Prolog symbols.\n\nascii: Char is a 7-bit ASCII character (0..127).\n\nwhite: Char is a space or tab, i.e. white space inside a line.\n\ncntrl: Char is an ASCII control character (0..31).\n\ndigit: Char is a digit.\n\ndigit(Weight): Char is a digit with value Weight. E.g. char_type(X, digit(6)) yields X = '6'. Useful for parsing numbers.\n\nxdigit(Weight): Char is a hexadecimal digit with value Weight. E.g. 
char_type(a, xdigit(X)) yields X = 10. Useful for parsing numbers.\n\ngraph: Char produces a visible mark on a page when printed. Note that the space is not included!\n\nlower: Char is a lowercase letter.\n\nlower(Upper): Char is a lowercase version of Upper. Only true if Char is lowercase and Upper uppercase.\n\nto_lower(Upper): Char is a lowercase version of Upper. For non-letters, or letters without case, Char and Upper are the same. See also upcase_atom/2 and downcase_atom/2.\n\nupper: Char is an uppercase letter.\n\nupper(Lower): Char is an uppercase version of Lower. Only true if Char is uppercase and Lower lowercase.\n\nto_upper(Lower): Char is an uppercase version of Lower. For non-letters, or letters without case, Char and Lower are the same. See also upcase_atom/2 and downcase_atom/2.\n\npunct: Char is a punctuation character. This is a graph character that is not a letter or digit.\n\nspace: Char is some form of layout character (tab, vertical tab, newline, etc.).\n\nend_of_file: Char is -1.\n\nend_of_line: Char ends a line (ASCII: 10..13).\n\nnewline: Char is a newline character (10).\n\nperiod: Char counts as the end of a sentence (.,!,?).\n\nquote: Char is a quote character (\", ', `).\n\nparen(Close): Char is an open parenthesis and Close is the corresponding close parenthesis.\n\nprolog_var_start: Char can start a Prolog variable name.\n\nprolog_atom_start: Char can start an unquoted Prolog atom that is not a symbol.\n\nprolog_identifier_continue: Char can continue a Prolog variable name or atom.\n\nprolog_symbol: Char is a Prolog symbol character. Sequences of Prolog symbol characters glue together to form an unquoted atom. Examples are =.., \\=, etc.\n\n ", "prefix":"char_type" }, "character_count/2": { "body":"character_count(${1:Stream}, ${2:Count})$3\n$0", "description":"character_count(+Stream, -Count).\nUnify Count with the current character index. For input streams this is the number of characters read since the open; for output streams this is the number of characters written. Counting starts at 0.", "prefix":"character_count" }, "charsio:atom_to_chars/2": { "body":"atom_to_chars(${1:Atom}, ${2:Codes})$3\n$0", "description":"[det]atom_to_chars(+Atom, -Codes).\nConvert Atom into a list of character codes. deprecated: Use ISO atom_codes/2.\n\n ", "prefix":"atom_to_chars" }, "charsio:atom_to_chars/3": { "body":"atom_to_chars(${1:Atom}, ${2:Codes}, ${3:Tail})$4\n$0", "description":"[det]atom_to_chars(+Atom, -Codes, ?Tail).\nConvert Atom into a difference list of character codes.", "prefix":"atom_to_chars" }, "charsio:format_to_chars/3": { "body":"format_to_chars(${1:Format}, ${2:Args}, ${3:Codes})$4\n$0", "description":"[det]format_to_chars(+Format, +Args, -Codes).\nUse format/2 to write to a list of character codes.", "prefix":"format_to_chars" }, "charsio:format_to_chars/4": { "body":"format_to_chars(${1:Format}, ${2:Args}, ${3:Codes}, ${4:Tail})$5\n$0", "description":"[det]format_to_chars(+Format, +Args, -Codes, ?Tail).\nUse format/2 to write to a difference list of character codes.", "prefix":"format_to_chars" }, "charsio:number_to_chars/2": { "body":"number_to_chars(${1:Number}, ${2:Codes})$3\n$0", "description":"[det]number_to_chars(+Number, -Codes).\nConvert Number into a list of character codes. 
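For illustration (query not part of the library documentation), the goal below binds Cs to the character codes of 42, i.e. [0'4, 0'2]: \n\n\n\n?- number_to_chars(42, Cs).\n\n 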
deprecated: Use ISO number_codes/2.\n\n ", "prefix":"number_to_chars" }, "charsio:number_to_chars/3": { "body":"number_to_chars(${1:Number}, ${2:Codes}, ${3:Tail})$4\n$0", "description":"[det]number_to_chars(+Number, -Codes, ?Tail).\nConvert Number into a difference list of character codes.", "prefix":"number_to_chars" }, "charsio:open_chars_stream/2": { "body":"open_chars_stream(${1:Codes}, ${2:Stream})$3\n$0", "description":"[det]open_chars_stream(+Codes, -Stream).\nOpen Codes as an input stream. See also: open_string/2.\n\n ", "prefix":"open_chars_stream" }, "charsio:read_from_chars/2": { "body":"read_from_chars(${1:Codes}, ${2:Term})$3\n$0", "description":"[det]read_from_chars(+Codes, -Term).\nRead Codes into Term. Compatibility: The SWI-Prolog version does not require Codes to end in a full-stop.\n\n ", "prefix":"read_from_chars" }, "charsio:read_term_from_chars/3": { "body":"read_term_from_chars(${1:Codes}, ${2:Term}, ${3:Options})$4\n$0", "description":"[det]read_term_from_chars(+Codes, -Term, +Options).\nRead Codes into Term. Options are processed by read_term/3. Compatibility: sicstus\n\n ", "prefix":"read_term_from_chars" }, "charsio:with_output_to_chars/2": { "body":"with_output_to_chars(${1:Goal}, ${2:Codes})$3\n$0", "description":"[det]with_output_to_chars(:Goal, -Codes).\nRun Goal as with once/1. Output written to current_output is collected in Codes.", "prefix":"with_output_to_chars" }, "charsio:with_output_to_chars/3": { "body":"with_output_to_chars(${1:Goal}, ${2:Codes}, ${3:Tail})$4\n$0", "description":"[det]with_output_to_chars(:Goal, -Codes, ?Tail).\nRun Goal as with once/1. Output written to current_output is collected in Codes\\Tail.", "prefix":"with_output_to_chars" }, "charsio:with_output_to_chars/4": { "body":"with_output_to_chars(${1:Goal}, ${2:Stream}, ${3:Codes}, ${4:Tail})$5\n$0", "description":"[det]with_output_to_chars(:Goal, -Stream, -Codes, ?Tail).\nSame as with_output_to_chars/3 using an explicit stream. The difference list Codes\\Tail contains the character codes that Goal has written to Stream.", "prefix":"with_output_to_chars" }, "charsio:write_to_chars/2": { "body":"write_to_chars(${1:Term}, ${2:Codes})$3\n$0", "description":"write_to_chars(+Term, -Codes).\nWrite a term to a code list. True when Codes is a list of character codes written by write/1 on Term.", "prefix":"write_to_chars" }, "charsio:write_to_chars/3": { "body":"write_to_chars(${1:Term}, ${2:Codes}, ${3:Tail})$4\n$0", "description":"write_to_chars(+Term, -Codes, ?Tail).\nWrite a term to a code list. Codes\\Tail is a difference list of character codes produced by write/1 on Term.", "prefix":"write_to_chars" }, "chdir/1": { "body":"chdir(${1:Path})$2\n$0", "description":"chdir(+Path).\nCompatibility predicate. New code should use working_directory/2.", "prefix":"chdir" }, "check:check/0": { "body": ["check$1\n$0" ], "description":" check is det.\n\n Run all consistency checks defined by checker/2. Checks enabled by\n default are:\n\n * list_undefined/0 reports undefined predicates\n * list_trivial_fails/0 reports calls for which there is no\n matching clause.\n * list_redefined/0 reports predicates that have a local\n definition and a global definition. Note that these are\n *not* errors.\n * list_autoload/0 lists predicates that will be defined at\n runtime using the autoloader.", "prefix":"check" }, "check:checker/2": { "body":"checker(${1:Goal}, ${2:Message})$3\n$0", "description":"[multifile]checker(:Goal, +Message:text).\nRegister code validation routines. 
Each clause defines a Goal which performs a consistency check executed by check/0. Message is a short description of the check. For example, assuming the my_checks module defines a predicate list_format_mistakes/0: \n\n:- multifile check:checker/2.\ncheck:checker(my_checks:list_format_mistakes,\n \"errors with format/2 arguments\").\n\n The predicate is dynamic, so you can disable checks with retract/1. For example, to stop reporting redefined predicates: \n\n\n\nretract(check:checker(list_redefined,_)).\n\n \n\n", "prefix":"checker" }, "check:list_autoload/0": { "body": ["list_autoload$1\n$0" ], "description":" list_autoload is det.\n\n Report predicates that may be auto-loaded. These are predicates\n that are not defined, but will be loaded on demand if\n referenced.\n\n @tbd This predicate uses an older mechanism for finding\n undefined predicates. Should be synchronized with\n list undefined.\n @see autoload/0", "prefix":"list_autoload" }, "check:list_redefined/0": { "body":"list_redefined$1\n$0", "description":"list_redefined.\nLists predicates that are defined in the global module user as well as in a normal module; that is, predicates for which the local definition overrules the global default definition.", "prefix":"list_redefined" }, "check:list_strings/0": { "body": ["list_strings$1\n$0" ], "description":" list_strings is det.\n list_strings(+Options) is det.\n\n List strings that appear in clauses. This predicate is used to\n find portability issues for changing the Prolog flag\n =double_quotes= from =codes= to =string=, creating packed string\n objects. Warnings may be suppressed using the following\n multifile hooks:\n\n - string_predicate/1 to stop checking certain predicates\n - valid_string_goal/1 to tell the checker that a goal is\n safe.\n\n @see Prolog flag =double_quotes=.", "prefix":"list_strings" }, "check:list_strings/1": { "body":"list_strings(${1:Options})$2\n$0", "description":"[det]list_strings(+Options).\nList strings that appear in clauses. This predicate is used to find portability issues for changing the Prolog flag double_quotes from codes to string, creating packed string objects. Warnings may be suppressed using the following multifile hooks: \n\nstring_predicate/1 to stop checking certain predicates\nvalid_string_goal/1 to tell the checker that a goal is safe.\n\n See also: Prolog flag double_quotes.\n\n ", "prefix":"list_strings" }, "check:list_trivial_fails/0": { "body": ["list_trivial_fails$1\n$0" ], "description":" list_trivial_fails is det.\n list_trivial_fails(+Options) is det.\n\n List goals that trivially fail because there is no matching\n clause. Options:\n\n * module_class(+Classes)\n Process modules of the given Classes. The default for\n classes is =|[user]|=. For example, to include the\n libraries into the examination, use =|[user,library]|=.", "prefix":"list_trivial_fails" }, "check:list_trivial_fails/1": { "body":"list_trivial_fails(${1:Options})$2\n$0", "description":"[det]list_trivial_fails(+Options).\nList goals that trivially fail because there is no matching clause. Options: module_class(+Classes): Process modules of the given Classes. The default for classes is [user]. For example, to include the libraries into the examination, use [user,library].\n\n ", "prefix":"list_trivial_fails" }, "check:list_undefined/0": { "body": ["list_undefined$1\n$0" ], "description":" list_undefined is det.\n list_undefined(+Options) is det.\n\n Report undefined predicates. 
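As an illustrative query (not part of the original documentation),\n scanning both user code and the loaded libraries:\n\n ==\n ?- list_undefined([module_class([user,library])]).\n ==\n\n 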
This predicate finds undefined\n predicates by decompiling and analyzing the body of all clauses.\n Options:\n\n * module_class(+Classes)\n Process modules of the given Classes. The default for\n classes is =|[user]|=. For example, to include the\n libraries into the examination, use =|[user,library]|=.\n\n @see gxref/0 provides a graphical cross-referencer.\n @see make/0 calls list_undefined/0", "prefix":"list_undefined" }, "check:list_undefined/1": { "body":"list_undefined(${1:Options})$2\n$0", "description":"[det]list_undefined(+Options).\nReport undefined predicates. This predicate finds undefined predicates by decompiling and analyzing the body of all clauses. Options: module_class(+Classes): Process modules of the given Classes. The default for classes is [user]. For example, to include the libraries into the examination, use [user,library].\n\n See also: - gxref/0 provides a graphical cross-referencer. - make/0 calls list_undefined/0\n\n ", "prefix":"list_undefined" }, "check:list_void_declarations/0": { "body": ["list_void_declarations$1\n$0" ], "description":" list_void_declarations is det.\n\n List predicates that have declared attributes, but no clauses.", "prefix":"list_void_declarations" }, "check:string_predicate/1": { "body":"string_predicate(${1:PredicateIndicator})$2\n$0", "description":"[multifile]string_predicate(:PredicateIndicator).\nMultifile hook to disable list_strings/0 on the given predicate. This is typically used for facts that store strings.", "prefix":"string_predicate" }, "check:trivial_fail_goal/1": { "body":"trivial_fail_goal(${1:Goal})$2\n$0", "description":"[multifile]trivial_fail_goal(:Goal).\nMultifile hook that tells list_trivial_fails/0 to accept Goal as valid.", "prefix":"trivial_fail_goal" }, "check:valid_string_goal/1": { "body":"valid_string_goal(${1:Goal})$2\n$0", "description":"[semidet,multifile]valid_string_goal(+Goal).\nMultifile hook that qualifies Goal as valid for list_strings/0. For example, format(\"Hello world~n\") is considered proper use of string constants.", "prefix":"valid_string_goal" }, "check_installation:check_installation/0": { "body": ["check_installation$1\n$0" ], "description":" check_installation\n\n Check features of the installed system. Performs the following\n tests:\n\n 1. Test whether features that depend on optional libraries\n are present (e.g., unbounded arithmetic support)\n 2. Test that all standard libraries that depend on foreign\n code are present.\n\n If issues are found it prints a diagnostic message with a link\n to a wiki page with additional information about the issue.", "prefix":"check_installation" }, "check_installation:check_installation/1": { "body": ["check_installation(${1:'Param1'})$2\n$0" ], "description":"check_installation('Param1')", "prefix":"check_installation" }, "checklast:check_old_last/0": { "body": ["check_old_last$1\n$0" ], "description":"check_old_last", "prefix":"check_old_last" }, "checkselect:check_old_select/0": { "body": ["check_old_select$1\n$0" ], "description":" check_old_select\n\n When compiling, print calls to select/3 that may use the wrong\n argument order. 
Up to version 3.3.x the argument order of select/3\n was\n\n select(+List, ?Element, ?RestList).\n\n Later versions use the compatible version\n\n select(?Element, +List, ?RestList).", "prefix":"check_old_select" }, "chr/a_star:a_star/4": { "body": [ "a_star(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"a_star('Param1','Param2','Param3','Param4')", "prefix":"a_star" }, "chr/binomialheap:delete_min_q/3": { "body": ["delete_min_q(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"delete_min_q('Param1','Param2','Param3')", "prefix":"delete_min_q" }, "chr/binomialheap:empty_q/1": { "body": ["empty_q(${1:'Param1'})$2\n$0" ], "description":"empty_q('Param1')", "prefix":"empty_q" }, "chr/binomialheap:find_min_q/2": { "body": ["find_min_q(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"find_min_q('Param1','Param2')", "prefix":"find_min_q" }, "chr/binomialheap:insert_list_q/3": { "body": ["insert_list_q(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"insert_list_q('Param1','Param2','Param3')", "prefix":"insert_list_q" }, "chr/binomialheap:insert_q/3": { "body": ["insert_q(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"insert_q('Param1','Param2','Param3')", "prefix":"insert_q" }, "chr/builtins:binds_b/2": { "body": ["binds_b(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"binds_b('Param1','Param2')", "prefix":"binds_b" }, "chr/builtins:builtin_binds_b/2": { "body": ["builtin_binds_b(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"builtin_binds_b('Param1','Param2')", "prefix":"builtin_binds_b" }, "chr/builtins:entails_b/2": { "body": ["entails_b(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"entails_b('Param1','Param2')", "prefix":"entails_b" }, "chr/builtins:negate_b/2": { "body": ["negate_b(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"negate_b('Param1','Param2')", "prefix":"negate_b" }, "chr/chr_compiler_errors:chr_error/3": { "body": ["chr_error(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"chr_error('Param1','Param2','Param3')", "prefix":"chr_error" }, "chr/chr_compiler_errors:chr_info/3": { "body": ["chr_info(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"chr_info('Param1','Param2','Param3')", "prefix":"chr_info" }, "chr/chr_compiler_errors:chr_warning/3": { "body": ["chr_warning(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"chr_warning('Param1','Param2','Param3')", "prefix":"chr_warning" }, "chr/chr_compiler_errors:print_chr_error/1": { "body": ["print_chr_error(${1:'Param1'})$2\n$0" ], "description":"print_chr_error('Param1')", "prefix":"print_chr_error" }, "chr/chr_compiler_options:chr_pp_flag/2": { "body": ["chr_pp_flag(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"chr_pp_flag('Param1','Param2')", "prefix":"chr_pp_flag" }, "chr/chr_compiler_options:handle_option/2": { "body": ["handle_option(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"handle_option('Param1','Param2')", "prefix":"handle_option" }, "chr/chr_compiler_options:init_chr_pp_flags/0": { "body": ["init_chr_pp_flags$1\n$0" ], "description":"init_chr_pp_flags", "prefix":"init_chr_pp_flags" }, "chr/chr_compiler_utility:arg1/3": { "body": ["arg1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"arg1('Param1','Param2','Param3')", "prefix":"arg1" }, "chr/chr_compiler_utility:atom_concat_list/2": { "body": ["atom_concat_list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], 
"description":"atom_concat_list('Param1','Param2')", "prefix":"atom_concat_list" }, "chr/chr_compiler_utility:conj2list/2": { "body": ["conj2list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"conj2list('Param1','Param2')", "prefix":"conj2list" }, "chr/chr_compiler_utility:copy_with_variable_replacement/3": { "body": [ "copy_with_variable_replacement(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"copy_with_variable_replacement('Param1','Param2','Param3')", "prefix":"copy_with_variable_replacement" }, "chr/chr_compiler_utility:disj2list/2": { "body": ["disj2list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"disj2list('Param1','Param2')", "prefix":"disj2list" }, "chr/chr_compiler_utility:fold/4": { "body": [ "fold(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"fold('Param1','Param2','Param3','Param4')", "prefix":"fold" }, "chr/chr_compiler_utility:fold1/3": { "body": ["fold1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"fold1('Param1','Param2','Param3')", "prefix":"fold1" }, "chr/chr_compiler_utility:identical_guarded_rules/2": { "body": ["identical_guarded_rules(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"identical_guarded_rules('Param1','Param2')", "prefix":"identical_guarded_rules" }, "chr/chr_compiler_utility:identical_rules/2": { "body": ["identical_rules(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"identical_rules('Param1','Param2')", "prefix":"identical_rules" }, "chr/chr_compiler_utility:init/2": { "body": ["init(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"init('Param1','Param2')", "prefix":"init" }, "chr/chr_compiler_utility:instrument_goal/4": { "body": [ "instrument_goal(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"instrument_goal('Param1','Param2','Param3','Param4')", "prefix":"instrument_goal" }, "chr/chr_compiler_utility:list2conj/2": { "body": ["list2conj(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"list2conj('Param1','Param2')", "prefix":"list2conj" }, "chr/chr_compiler_utility:list2disj/2": { "body": ["list2disj(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"list2disj('Param1','Param2')", "prefix":"list2disj" }, "chr/chr_compiler_utility:maplist_dcg/5": { "body": [ "maplist_dcg(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"maplist_dcg('Param1','Param2','Param3','Param4','Param5')", "prefix":"maplist_dcg" }, "chr/chr_compiler_utility:maplist_dcg/6": { "body": [ "maplist_dcg(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'})$7\n$0" ], "description":"maplist_dcg('Param1','Param2','Param3','Param4','Param5','Param6')", "prefix":"maplist_dcg" }, "chr/chr_compiler_utility:member2/3": { "body": ["member2(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"member2('Param1','Param2','Param3')", "prefix":"member2" }, "chr/chr_compiler_utility:my_term_copy/3": { "body": ["my_term_copy(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"my_term_copy('Param1','Param2','Param3')", "prefix":"my_term_copy" }, "chr/chr_compiler_utility:my_term_copy/4": { "body": [ "my_term_copy(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"my_term_copy('Param1','Param2','Param3','Param4')", "prefix":"my_term_copy" }, "chr/chr_compiler_utility:pair_all_with/3": { "body": ["pair_all_with(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], 
"description":"pair_all_with('Param1','Param2','Param3')", "prefix":"pair_all_with" }, "chr/chr_compiler_utility:replicate/3": { "body": ["replicate(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"replicate('Param1','Param2','Param3')", "prefix":"replicate" }, "chr/chr_compiler_utility:select2/6": { "body": [ "select2(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'})$7\n$0" ], "description":"select2('Param1','Param2','Param3','Param4','Param5','Param6')", "prefix":"select2" }, "chr/chr_compiler_utility:set_elems/2": { "body": ["set_elems(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"set_elems('Param1','Param2')", "prefix":"set_elems" }, "chr/chr_compiler_utility:sort_by_key/3": { "body": ["sort_by_key(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"sort_by_key('Param1','Param2','Param3')", "prefix":"sort_by_key" }, "chr/chr_compiler_utility:time/2": { "body": ["time(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"time('Param1','Param2')", "prefix":"time" }, "chr/chr_compiler_utility:tree_set_add/3": { "body": ["tree_set_add(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"tree_set_add('Param1','Param2','Param3')", "prefix":"tree_set_add" }, "chr/chr_compiler_utility:tree_set_empty/1": { "body": ["tree_set_empty(${1:'Param1'})$2\n$0" ], "description":"tree_set_empty('Param1')", "prefix":"tree_set_empty" }, "chr/chr_compiler_utility:tree_set_memberchk/2": { "body": ["tree_set_memberchk(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"tree_set_memberchk('Param1','Param2')", "prefix":"tree_set_memberchk" }, "chr/chr_compiler_utility:tree_set_merge/3": { "body": ["tree_set_merge(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"tree_set_merge('Param1','Param2','Param3')", "prefix":"tree_set_merge" }, "chr/chr_compiler_utility:variable_replacement/3": { "body": [ "variable_replacement(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"variable_replacement('Param1','Param2','Param3')", "prefix":"variable_replacement" }, "chr/chr_compiler_utility:variable_replacement/4": { "body": [ "variable_replacement(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"variable_replacement('Param1','Param2','Param3','Param4')", "prefix":"variable_replacement" }, "chr/chr_compiler_utility:wrap_in_functor/3": { "body": [ "wrap_in_functor(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"wrap_in_functor('Param1','Param2','Param3')", "prefix":"wrap_in_functor" }, "chr/chr_debug:chr_show_store/1": { "body": ["chr_show_store(${1:Module})$2\n$0" ], "description":"\tchr_show_store(+Module)\n\n\tPrints all suspended constraints of module Mod to the standard\n\toutput.", "prefix":"chr_show_store" }, "chr/chr_debug:find_chr_constraint/1": { "body": ["find_chr_constraint(${1:'Param1'})$2\n$0" ], "description":"find_chr_constraint('Param1')", "prefix":"find_chr_constraint" }, "chr/chr_find:find_with_var_identity/4": { "body": [ "find_with_var_identity(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"find_with_var_identity('Param1','Param2','Param3','Param4')", "prefix":"find_with_var_identity" }, "chr/chr_find:forall/3": { "body": ["forall(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"forall('Param1','Param2','Param3')", "prefix":"forall" }, "chr/chr_find:forsome/3": { "body": ["forsome(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], 
"description":"forsome('Param1','Param2','Param3')", "prefix":"forsome" }, "chr/chr_hashtable_store:delete_first_ht/3": { "body": [ "delete_first_ht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"delete_first_ht('Param1','Param2','Param3')", "prefix":"delete_first_ht" }, "chr/chr_hashtable_store:delete_ht/3": { "body": ["delete_ht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"delete_ht('Param1','Param2','Param3')", "prefix":"delete_ht" }, "chr/chr_hashtable_store:delete_ht1/4": { "body": [ "delete_ht1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"delete_ht1('Param1','Param2','Param3','Param4')", "prefix":"delete_ht1" }, "chr/chr_hashtable_store:insert_ht/3": { "body": ["insert_ht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"insert_ht('Param1','Param2','Param3')", "prefix":"insert_ht" }, "chr/chr_hashtable_store:insert_ht/4": { "body": [ "insert_ht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"insert_ht('Param1','Param2','Param3','Param4')", "prefix":"insert_ht" }, "chr/chr_hashtable_store:insert_ht1/4": { "body": [ "insert_ht1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"insert_ht1('Param1','Param2','Param3','Param4')", "prefix":"insert_ht1" }, "chr/chr_hashtable_store:lookup_ht/3": { "body": ["lookup_ht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lookup_ht('Param1','Param2','Param3')", "prefix":"lookup_ht" }, "chr/chr_hashtable_store:lookup_ht1/4": { "body": [ "lookup_ht1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"lookup_ht1('Param1','Param2','Param3','Param4')", "prefix":"lookup_ht1" }, "chr/chr_hashtable_store:lookup_ht2/4": { "body": [ "lookup_ht2(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"lookup_ht2('Param1','Param2','Param3','Param4')", "prefix":"lookup_ht2" }, "chr/chr_hashtable_store:new_ht/1": { "body": ["new_ht(${1:'Param1'})$2\n$0" ], "description":"new_ht('Param1')", "prefix":"new_ht" }, "chr/chr_hashtable_store:stats_ht/1": { "body": ["stats_ht(${1:'Param1'})$2\n$0" ], "description":"stats_ht('Param1')", "prefix":"stats_ht" }, "chr/chr_hashtable_store:value_ht/2": { "body": ["value_ht(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"value_ht('Param1','Param2')", "prefix":"value_ht" }, "chr/chr_integertable_store:delete_iht/3": { "body": ["delete_iht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"delete_iht('Param1','Param2','Param3')", "prefix":"delete_iht" }, "chr/chr_integertable_store:insert_iht/3": { "body": ["insert_iht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"insert_iht('Param1','Param2','Param3')", "prefix":"insert_iht" }, "chr/chr_integertable_store:lookup_iht/3": { "body": ["lookup_iht(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lookup_iht('Param1','Param2','Param3')", "prefix":"lookup_iht" }, "chr/chr_integertable_store:new_iht/1": { "body": ["new_iht(${1:'Param1'})$2\n$0" ], "description":"new_iht('Param1')", "prefix":"new_iht" }, "chr/chr_integertable_store:value_iht/2": { "body": ["value_iht(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"value_iht('Param1','Param2')", "prefix":"value_iht" }, "chr/chr_messages:chr_message/3": { "body": ["chr_message(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"chr_message('Param1','Param2','Param3')", 
"prefix":"chr_message" }, "chr/chr_runtime:chr_leash/1": { "body": ["chr_leash(${1:'Param1'})$2\n$0" ], "description":"chr_leash('Param1')", "prefix":"chr_leash" }, "chr/chr_runtime:chr_notrace/0": { "body": ["chr_notrace$1\n$0" ], "description":"chr_notrace", "prefix":"chr_notrace" }, "chr/chr_runtime:chr_show_store/1": { "body": ["chr_show_store(${1:'Param1'})$2\n$0" ], "description":"chr_show_store('Param1')", "prefix":"chr_show_store" }, "chr/chr_runtime:chr_trace/0": { "body": ["chr_trace$1\n$0" ], "description":"chr_trace", "prefix":"chr_trace" }, "chr/chr_runtime:current_chr_constraint/1": { "body": ["current_chr_constraint(${1:Constraint})$2\n$0" ], "description":"\tcurrent_chr_constraint(:Constraint) is nondet.\n\n\tTrue if Constraint is a constraint associated with the qualified\n\tmodule.", "prefix":"current_chr_constraint" }, "chr/chr_runtime:find_chr_constraint/1": { "body": ["find_chr_constraint(${1:Constraint})$2\n$0" ], "description":"\tfind_chr_constraint(-Constraint) is nondet.\n\n\tTrue when Constraint is a currently known constraint in any\n\tknown CHR module.\n\n\t@deprecated\tcurrent_chr_constraint/1 handles modules.", "prefix":"find_chr_constraint" }, "chr/chr_translate:chr_translate/2": { "body": ["chr_translate(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"chr_translate('Param1','Param2')", "prefix":"chr_translate" }, "chr/chr_translate:chr_translate_line_info/3": { "body": [ "chr_translate_line_info(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"chr_translate_line_info('Param1','Param2','Param3')", "prefix":"chr_translate_line_info" }, "chr/clean_code:clean_clauses/2": { "body": ["clean_clauses(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"clean_clauses('Param1','Param2')", "prefix":"clean_clauses" }, "chr/guard_entailment:entails_guard/2": { "body": ["entails_guard(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"entails_guard('Param1','Param2')", "prefix":"entails_guard" }, "chr/guard_entailment:simplify_guards/5": { "body": [ "simplify_guards(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"simplify_guards('Param1','Param2','Param3','Param4','Param5')", "prefix":"simplify_guards" }, "chr/listmap:listmap_empty/1": { "body": ["listmap_empty(${1:'Param1'})$2\n$0" ], "description":"listmap_empty('Param1')", "prefix":"listmap_empty" }, "chr/listmap:listmap_insert/4": { "body": [ "listmap_insert(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"listmap_insert('Param1','Param2','Param3','Param4')", "prefix":"listmap_insert" }, "chr/listmap:listmap_lookup/3": { "body": ["listmap_lookup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"listmap_lookup('Param1','Param2','Param3')", "prefix":"listmap_lookup" }, "chr/listmap:listmap_merge/5": { "body": [ "listmap_merge(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"listmap_merge('Param1','Param2','Param3','Param4','Param5')", "prefix":"listmap_merge" }, "chr/listmap:listmap_remove/3": { "body": ["listmap_remove(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"listmap_remove('Param1','Param2','Param3')", "prefix":"listmap_remove" }, "chr/pairlist:fst_of_pairs/2": { "body": ["fst_of_pairs(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"fst_of_pairs('Param1','Param2')", "prefix":"fst_of_pairs" }, "chr/pairlist:lookup/3": { "body": ["lookup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], 
"description":"lookup('Param1','Param2','Param3')", "prefix":"lookup" }, "chr/pairlist:lookup_any/3": { "body": ["lookup_any(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lookup_any('Param1','Param2','Param3')", "prefix":"lookup_any" }, "chr/pairlist:lookup_any_eq/3": { "body": ["lookup_any_eq(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lookup_any_eq('Param1','Param2','Param3')", "prefix":"lookup_any_eq" }, "chr/pairlist:lookup_eq/3": { "body": ["lookup_eq(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lookup_eq('Param1','Param2','Param3')", "prefix":"lookup_eq" }, "chr/pairlist:pairlist_delete_eq/3": { "body": [ "pairlist_delete_eq(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"pairlist_delete_eq('Param1','Param2','Param3')", "prefix":"pairlist_delete_eq" }, "chr/pairlist:pairup/3": { "body": ["pairup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"pairup('Param1','Param2','Param3')", "prefix":"pairup" }, "chr/pairlist:snd_of_pairs/2": { "body": ["snd_of_pairs(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"snd_of_pairs('Param1','Param2')", "prefix":"snd_of_pairs" }, "chr/pairlist:translate/3": { "body": ["translate(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"translate('Param1','Param2','Param3')", "prefix":"translate" }, "chr:chr_leash/1": { "body": ["chr_leash(${1:'Param1'})$2\n$0" ], "description":"chr_leash('Param1')", "prefix":"chr_leash" }, "chr:chr_notrace/0": { "body": ["chr_notrace$1\n$0" ], "description":"chr_notrace", "prefix":"chr_notrace" }, "chr:chr_show_store/1": { "body": ["chr_show_store(${1:'Param1'})$2\n$0" ], "description":"chr_show_store('Param1')", "prefix":"chr_show_store" }, "chr:chr_trace/0": { "body": ["chr_trace$1\n$0" ], "description":"chr_trace", "prefix":"chr_trace" }, "chr:find_chr_constraint/1": { "body": ["find_chr_constraint(${1:'Param1'})$2\n$0" ], "description":"find_chr_constraint('Param1')", "prefix":"find_chr_constraint" }, "chr_leash/1": { "body":"chr_leash(${1:Spec})$2\n$0", "description":"chr_leash(+Spec).\nDefine the set of CHR ports on which the CHR tracer asks for user intervention (i.e. stops). Spec is either a list of ports as defined in section 8.4.1 or a predefined `alias'. Defined aliases are: full to stop at all ports, none or off to never stop, and default to stop at the call, exit, fail, wake and apply ports. See also leash/1.", "prefix":"chr_leash" }, "chr_notrace/0": { "body":"chr_notrace$1\n$0", "description":"chr_notrace.\nDeactivate the CHR tracer. By default the CHR tracer is activated and deactivated automatically by the Prolog predicates trace/0 and notrace/0.", "prefix":"chr_notrace" }, "chr_show_store/1": { "body":"chr_show_store(${1:Mod})$2\n$0", "description":"chr_show_store(+Mod).\nPrints all suspended constraints of module Mod to the standard output. This predicate is automatically called by the SWI-Prolog top level at the end of each query for every CHR module currently loaded. The Prolog flag chr_toplevel_show_store controls whether the top level shows the constraint stores. The value true enables it. Any other value disables it.", "prefix":"chr_show_store" }, "chr_trace/0": { "body":"chr_trace$1\n$0", "description":"chr_trace.\nActivate the CHR tracer. 
By default the CHR tracer is activated and deactivated automatically by the Prolog predicates trace/0 and notrace/0.", "prefix":"chr_trace" }, "clause/2": { "body":"clause(${1:Head}, ${2:Body})$3\n$0", "description":"[ISO]clause(:Head, ?Body).\nTrue if Head can be unified with a clause head and Body with the corresponding clause body. Gives alternative clauses on backtracking. For facts, Body is unified with the atom true.", "prefix":"clause" }, "clause/3": { "body":"clause(${1:Head}, ${2:Body}, ${3:Reference})$4\n$0", "description":"clause(:Head, ?Body, ?Reference).\nEquivalent to clause/2, but unifies Reference with a unique reference to the clause (see also assert/2, erase/1). If Reference is instantiated to a reference the clause's head and body will be unified with Head and Body.", "prefix":"clause" }, "clause_property/2": { "body":"clause_property(${1:ClauseRef}, ${2:Property})$3\n$0", "description":"clause_property(+ClauseRef, -Property).\nQueries properties of a clause. ClauseRef is a reference to a clause as produced by clause/3, nth_clause/3 or prolog_frame_attribute/3. Unlike most other predicates that access clause references, clause_property/2 may be used to get information about erased clauses that have not yet been reclaimed. Property is one of the following: file(FileName): Unify FileName with the name of the file from which the clause is loaded. Fails if the clause was not created by loading a file (e.g., clauses added using assertz/1). See also source.\n\nline_count(LineNumber): Unify LineNumber with the line number of the clause. Fails if the clause is not associated to a file.\n\nsize(SizeInBytes): True when SizeInBytes is the size that the clause uses in memory in bytes. The size required by a predicate also includes the predicate data record, a linked list of clauses, clause selection instructions and optionally one or more clause indexes.\n\nsource(FileName): Unify FileName with the name of the source file that created the clause. This is the same as the file property, unless the file is loaded from a file that is textually included into source using include/1. In this scenario, file is the included file, while the source property refers to the main file.\n\nfact: True if the clause has no body.\n\nerased: True if the clause has been erased, but not yet reclaimed because it is referenced.\n\npredicate(PredicateIndicator): PredicateIndicator denotes the predicate to which this clause belongs. This is needed to obtain information on erased clauses because the usual way to obtain this information using clause/3 fails for erased clauses.\n\nmodule(Module): Module is the context module used to execute the body of the clause. For normal clauses, this is the same as the module in which the predicate is defined. However, if a clause is compiled with a module qualified head, the clause belongs to the predicate with the qualified head, while the body is executed in the context of the module in which the clause was defined.\n\n ", "prefix":"clause_property" }, "clib_rlimit:rlimit/3": { "body": ["rlimit(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"rlimit('Param1','Param2','Param3')", "prefix":"rlimit" }, "close/1": { "body":"close(${1:Stream})$2\n$0", "description":"[ISO]close(+Stream).\nClose the specified stream. If Stream is not open, an existence error is raised. See stream_pair/3 for the implications of closing a stream pair. 
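For illustration (sketch; 'data.txt' and process/1 are placeholders, not part of the manual), a common idiom combines open/3 and close/1 with setup_call_cleanup/3 so the stream is closed even if the goal throws an exception: \n\n\n\n?- setup_call_cleanup(open('data.txt', read, In),\n                      process(In),\n                      close(In)).\n\n 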
If the closed stream is the current input, output or error stream, the stream alias is bound to the initial standard I/O streams of the process. Calling close/1 on the initial standard I/O streams of the process is a no-op for an input stream and flushes an output stream without closing it.79This behaviour was defined with purely interactive usage of Prolog in mind. Applications should not count on this behaviour. Future versions may allow for closing the initial standard I/O streams.\n\n", "prefix":"close" }, "close/2": { "body":"close(${1:Stream}, ${2:Options})$3\n$0", "description":"[ISO]close(+Stream, +Options).\nProvides close(Stream, [force(true)]) as the only option. Called this way, any resource errors (such as write errors while flushing the output buffer) are ignored.", "prefix":"close" }, "close_dde_conversation/1": { "body":"close_dde_conversation(${1:Handle})$2\n$0", "description":"close_dde_conversation(+Handle).\nClose the conversation associated with Handle. All opened conversations should be closed when they're no longer needed, although the system will close any that remain open on process termination.", "prefix":"close_dde_conversation" }, "close_table/1": { "body":"close_table(${1:Handle})$2\n$0", "description":"close_table(+Handle).\nClose the file and other system resources, but do not remove the description of the table, so it can be re-opened later.", "prefix":"close_table" }, "clp_distinct:all_distinct/1": { "body": ["all_distinct(${1:'Param1'})$2\n$0" ], "description":"all_distinct('Param1')", "prefix":"all_distinct" }, "clp_distinct:vars_in/2": { "body": ["vars_in(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"vars_in('Param1','Param2')", "prefix":"vars_in" }, "clp_distinct:vars_in/3": { "body": ["vars_in(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"vars_in('Param1','Param2','Param3')", "prefix":"vars_in" }, "clp_events:notify/2": { "body": ["notify(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"notify('Param1','Param2')", "prefix":"notify" }, "clp_events:subscribe/4": { "body": [ "subscribe(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"subscribe('Param1','Param2','Param3','Param4')", "prefix":"subscribe" }, "clp_events:unsubscribe/2": { "body": ["unsubscribe(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"unsubscribe('Param1','Param2')", "prefix":"unsubscribe" }, "clpb:labeling/1": { "body":"labeling(${1:Vs})$2\n$0", "description":"[multi]labeling(+Vs).\nEnumerate concrete solutions. Assigns truth values to the Boolean variables Vs such that all stated constraints are satisfied.", "prefix":"labeling" }, "clpb:random_labeling/2": { "body":"random_labeling(${1:Seed}, ${2:Vs})$3\n$0", "description":"[det]random_labeling(+Seed, +Vs).\nSelect a single random solution. An admissible assignment of truth values to the Boolean variables in Vs is chosen in such a way that each admissible assignment is equally likely. Seed is an integer, used as the initial seed for the random number generator.", "prefix":"random_labeling" }, "clpb:sat/1": { "body":"sat(${1:Expr})$2\n$0", "description":"[semidet]sat(+Expr).\nTrue iff Expr is a satisfiable Boolean expression.", "prefix":"sat" }, "clpb:sat_count/2": { "body":"sat_count(${1:Expr}, ${2:Count})$3\n$0", "description":"[det]sat_count(+Expr, -Count).\nCount the number of admissible assignments. 
Count is the number of different assignments of truth values to the variables in the Boolean expression Expr, such that Expr is true and all posted constraints are satisfiable. A common form of invocation is sat_count(+[1|Vs], Count): This counts the number of admissible assignments to Vs without imposing any further constraints. \n\nExamples: \n\n\n\n?- sat(A =< B), Vs = [A,B], sat_count(+[1|Vs], Count).\nVs = [A, B],\nCount = 3,\nsat(A=:=A*B).\n\n?- length(Vs, 120),\n sat_count(+Vs, CountOr),\n sat_count(*(Vs), CountAnd).\nVs = [...],\nCountOr = 1329227995784915872903807060280344575,\nCountAnd = 1.\n\n ", "prefix":"sat_count" }, "clpb:taut/2": { "body":"taut(${1:Expr}, ${2:T})$3\n$0", "description":"[semidet]taut(+Expr, -T).\nTautology check. Succeeds with T = 0 if the Boolean expression Expr cannot be satisfied, and with T = 1 if Expr is always true with respect to the current constraints. Fails otherwise.", "prefix":"taut" }, "clpb:weighted_maximum/3": { "body":"weighted_maximum(${1:Weights}, ${2:Vs}, ${3:Maximum})$4\n$0", "description":"[multi]weighted_maximum(+Weights, +Vs, -Maximum).\nEnumerate weighted optima over admissible assignments. Maximize a linear objective function over Boolean variables Vs with integer coefficients Weights. This predicate assigns 0 and 1 to the variables in Vs such that all stated constraints are satisfied, and Maximum is the maximum of sum(Weight_i*V_i) over all admissible assignments. On backtracking, all admissible assignments that attain the optimum are generated. This predicate can also be used to minimize a linear Boolean program, since negative integers can appear in Weights. \n\nExample: \n\n\n\n?- sat(A#B), weighted_maximum([1,2,1], [A,B,C], Maximum).\nA = 0, B = 1, C = 1, Maximum = 3.\n\n ", "prefix":"weighted_maximum" }, "clpfd:all_different/1": { "body":"all_different(${1:Vars})$2\n$0", "description":"all_different(+Vars).\nLike all_distinct/1, but with weaker propagation. Consider using all_distinct/1 instead, since all_distinct/1 is typically acceptably efficient and propagates much more strongly.", "prefix":"all_different" }, "clpfd:all_distinct/1": { "body":"all_distinct(${1:Vars})$2\n$0", "description":"all_distinct(+Vars).\nTrue iff Vars are pairwise distinct. For example, all_distinct/1 can detect that not all variables can assume distinct values given the following domains: \n\n?- maplist(in, Vs,\n [1\\/3..4, 1..2\\/4, 1..2\\/4, 1..3, 1..3, 1..6]),\n all_distinct(Vs).\nfalse.\n\n ", "prefix":"all_distinct" }, "clpfd:automaton/3": { "body":"automaton(${1:Vs}, ${2:Nodes}, ${3:Arcs})$4\n$0", "description":"automaton(+Vs, +Nodes, +Arcs).\nDescribes a list of finite domain variables with a finite automaton. Equivalent to automaton(Vs, _, Vs, Nodes, Arcs, [], [], _), a common use case of automaton/8. 
In the following example, a list of binary finite domain variables is constrained to contain at least two consecutive ones: \n\ntwo_consecutive_ones(Vs) :-\n automaton(Vs, [source(a),sink(c)],\n [arc(a,0,a), arc(a,1,b),\n arc(b,0,a), arc(b,1,c),\n arc(c,0,c), arc(c,1,c)]).\n\n Example query: \n\n\n\n?- length(Vs, 3), two_consecutive_ones(Vs), label(Vs).\nVs = [0, 1, 1] ;\nVs = [1, 1, 0] ;\nVs = [1, 1, 1].\n\n ", "prefix":"automaton" }, "clpfd:automaton/8": { "body":"automaton(${1:Sequence}, ${2:Template}, ${3:Signature}, ${4:Nodes}, ${5:Arcs}, ${6:Counters}, ${7:Initials}, ${8:Finals})$9\n$0", "description":"automaton(+Sequence, ?Template, +Signature, +Nodes, +Arcs, +Counters, +Initials, ?Finals).\nDescribes a list of finite domain variables with a finite automaton. True iff the finite automaton induced by Nodes and Arcs (extended with Counters) accepts Signature. Sequence is a list of terms, all of the same shape. Additional constraints must link Sequence to Signature, if necessary. Nodes is a list of source(Node) and sink(Node) terms. Arcs is a list of arc(Node,Integer,Node) and arc(Node,Integer,Node,Exprs) terms that denote the automaton's transitions. Each node is represented by an arbitrary term. Transitions that are not mentioned go to an implicit failure node. Exprs is a list of arithmetic expressions, of the same length as Counters. In each expression, variables occurring in Counters symbolically refer to previous counter values, and variables occurring in Template refer to the current element of Sequence. When a transition containing arithmetic expressions is taken, each counter is updated according to the result of the corresponding expression. When a transition without arithmetic expressions is taken, all counters remain unchanged. Counters is a list of variables. Initials is a list of finite domain variables or integers denoting, in the same order, the initial value of each counter. These values are related to Finals according to the arithmetic expressions of the taken transitions. The following example is taken from Beldiceanu, Carlsson, Debruyne and Petit: \"Reformulation of Global Constraints Based on Constraints Checkers\", Constraints 10(4), pp 339-362 (2005). It relates a sequence of integers and finite domain variables to its number of inflexions, which are switches between strictly ascending and strictly descending subsequences: \n\n\n\nsequence_inflexions(Vs, N) :-\n variables_signature(Vs, Sigs),\n automaton(Sigs, _, Sigs,\n [source(s),sink(i),sink(j),sink(s)],\n [arc(s,0,s), arc(s,1,j), arc(s,2,i),\n arc(i,0,i), arc(i,1,j,[C+1]), arc(i,2,i),\n arc(j,0,j), arc(j,1,j),\n arc(j,2,i,[C+1])],\n [C], [0], [N]).\n\nvariables_signature([], []).\nvariables_signature([V|Vs], Sigs) :-\n variables_signature_(Vs, V, Sigs).\n\nvariables_signature_([], _, []).\nvariables_signature_([V|Vs], Prev, [S|Sigs]) :-\n V #= Prev #<==> S #= 0,\n Prev #< V #<==> S #= 1,\n Prev #> V #<==> S #= 2,\n variables_signature_(Vs, V, Sigs).\n\n Example queries: \n\n\n\n?- sequence_inflexions([1,2,3,3,2,1,3,0], N).\nN = 3.\n\n?- length(Ls, 5), Ls ins 0..1,\n sequence_inflexions(Ls, 3), label(Ls).\nLs = [0, 1, 0, 1, 0] ;\nLs = [1, 0, 1, 0, 1].\n\n ", "prefix":"automaton" }, "clpfd:chain/2": { "body":"chain(${1:Zs}, ${2:Relation})$3\n$0", "description":"chain(+Zs, +Relation).\nZs form a chain with respect to Relation. Zs is a list of finite domain variables that are a chain with respect to the partial order Relation, in the order they appear in the list. Relation must be #=, #=<, #>=, #< or #>. 
For example: \n\n?- chain([X,Y,Z], #>=).\nX#>=Y,\nY#>=Z.\n\n \n\n", "prefix":"chain" }, "clpfd:circuit/1": { "body":"circuit(${1:Vs})$2\n$0", "description":"circuit(+Vs).\nTrue iff the list Vs of finite domain variables induces a Hamiltonian circuit. The k-th element of Vs denotes the successor of node k. Node indexing starts with 1. Examples: \n\n?- length(Vs, _), circuit(Vs), label(Vs).\nVs = [] ;\nVs = [1] ;\nVs = [2, 1] ;\nVs = [2, 3, 1] ;\nVs = [3, 1, 2] ;\nVs = [2, 3, 4, 1] .\n\n ", "prefix":"circuit" }, "clpfd:cumulative/1": { "body":"cumulative(${1:Tasks})$2\n$0", "description":"cumulative(+Tasks).\nEquivalent to cumulative(Tasks, [limit(1)]). See cumulative/2.", "prefix":"cumulative" }, "clpfd:cumulative/2": { "body":"cumulative(${1:Tasks}, ${2:Options})$3\n$0", "description":"cumulative(+Tasks, +Options).\nSchedule with a limited resource. Tasks is a list of tasks, each of the form task(S_i, D_i, E_i, C_i, T_i). S_i denotes the start time, D_i the positive duration, E_i the end time, C_i the non-negative resource consumption, and T_i the task identifier. Each of these arguments must be a finite domain variable with bounded domain, or an integer. The constraint holds iff at each time slot during the start and end of each task, the total resource consumption of all tasks running at that time does not exceed the global resource limit. Options is a list of options. Currently, the only supported option is: limit(L): The integer L is the global resource limit. Default is 1.\n\n For example, given the following predicate that relates three tasks of durations 2 and 3 to a list containing their starting times: \n\n\n\ntasks_starts(Tasks, [S1,S2,S3]) :-\n Tasks = [task(S1,3,_,1,_),\n task(S2,2,_,1,_),\n task(S3,2,_,1,_)].\n\n We can use cumulative/2 as follows, and obtain a schedule: \n\n\n\n?- tasks_starts(Tasks, Starts), Starts ins 0..10,\n cumulative(Tasks, [limit(2)]), label(Starts).\nTasks = [task(0, 3, 3, 1, _G36), task(0, 2, 2, 1, _G45), ...],\nStarts = [0, 0, 2] .\n\n ", "prefix":"cumulative" }, "clpfd:disjoint2/1": { "body":"disjoint2(${1:Rectangles})$2\n$0", "description":"disjoint2(+Rectangles).\nTrue iff Rectangles are not overlapping. Rectangles is a list of terms of the form F(X_i, W_i, Y_i, H_i), where F is any functor, and the arguments are finite domain variables or integers that denote, respectively, the X coordinate, width, Y coordinate and height of each rectangle.", "prefix":"disjoint2" }, "clpfd:element/3": { "body":"element(${1:N}, ${2:Vs}, ${3:V})$4\n$0", "description":"element(?N, +Vs, ?V).\nThe N-th element of the list of finite domain variables Vs is V. Analogous to nth1/3.", "prefix":"element" }, "clpfd:fd_dom/2": { "body":"fd_dom(${1:Var}, ${2:Dom})$3\n$0", "description":"fd_dom(+Var, -Dom).\nDom is the current domain (see in/2) of Var. This predicate is useful if you want to reason about domains. It is not needed if you only want to display remaining domains; instead, separate your model from the search part and let the toplevel display this information via residual goals. For example, to implement a custom labeling strategy, you may need to inspect the current domain of a finite domain variable. 
With the following code, you can convert a finite domain to a list of integers: \n\n\n\ndom_integers(D, Is) :- phrase(dom_integers_(D), Is).\n\ndom_integers_(I) --> { integer(I) }, [I].\ndom_integers_(L..U) --> { numlist(L, U, Is) }, Is.\ndom_integers_(D1\\/D2) --> dom_integers_(D1), dom_integers_(D2).\n\n Example: \n\n\n\n?- X in 1..5, X #\\= 4, fd_dom(X, D), dom_integers(D, Is).\nD = 1..3\\/5,\nIs = [1,2,3,5],\nX in 1..3\\/5.\n\n \n\n", "prefix":"fd_dom" }, "clpfd:fd_inf/2": { "body":"fd_inf(${1:Var}, ${2:Inf})$3\n$0", "description":"fd_inf(+Var, -Inf).\nInf is the infimum of the current domain of Var.", "prefix":"fd_inf" }, "clpfd:fd_size/2": { "body":"fd_size(${1:Var}, ${2:Size})$3\n$0", "description":"fd_size(+Var, -Size).\nReflect the current size of a domain. Size is the number of elements of the current domain of Var, or the atom sup if the domain is unbounded.", "prefix":"fd_size" }, "clpfd:fd_sup/2": { "body":"fd_sup(${1:Var}, ${2:Sup})$3\n$0", "description":"fd_sup(+Var, -Sup).\nSup is the supremum of the current domain of Var.", "prefix":"fd_sup" }, "clpfd:fd_var/1": { "body":"fd_var(${1:Var})$2\n$0", "description":"fd_var(+Var).\nTrue iff Var is a CLP(FD) variable.", "prefix":"fd_var" }, "clpfd:global_cardinality/2": { "body":"global_cardinality(${1:Vs}, ${2:Pairs})$3\n$0", "description":"global_cardinality(+Vs, +Pairs).\nGlobal Cardinality constraint. Equivalent to global_cardinality(Vs, Pairs, []). See global_cardinality/3. Example: \n\n\n\n?- Vs = [_,_,_], global_cardinality(Vs, [1-2,3-_]), label(Vs).\nVs = [1, 1, 3] ;\nVs = [1, 3, 1] ;\nVs = [3, 1, 1].\n\n ", "prefix":"global_cardinality" }, "clpfd:global_cardinality/3": { "body":"global_cardinality(${1:Vs}, ${2:Pairs}, ${3:Options})$4\n$0", "description":"global_cardinality(+Vs, +Pairs, +Options).\nGlobal Cardinality constraint. Vs is a list of finite domain variables, Pairs is a list of Key-Num pairs, where Key is an integer and Num is a finite domain variable. The constraint holds iff each V in Vs is equal to some key, and for each Key-Num pair in Pairs, the number of occurrences of Key in Vs is Num. Options is a list of options. Supported options are: consistency(value): A weaker form of consistency is used.\n\ncost(Cost, Matrix): Matrix is a list of rows, one for each variable, in the order they occur in Vs. Each of these rows is a list of integers, one for each key, in the order these keys occur in Pairs. When variable v_i is assigned the value of key k_j, then the associated cost is Matrix_{ij}. Cost is the sum of all costs.\n\n ", "prefix":"global_cardinality" }, "clpfd:in/2": { "body": ["in(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"in('Param1','Param2')", "prefix":"in" }, "clpfd:indomain/1": { "body":"indomain(${1:Var})$2\n$0", "description":"indomain(?Var).\nBind Var to all feasible values of its domain on backtracking. The domain of Var must be finite.", "prefix":"indomain" }, "clpfd:ins/2": { "body": ["ins(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"ins('Param1','Param2')", "prefix":"ins" }, "clpfd:label/1": { "body":"label(${1:Vars})$2\n$0", "description":"label(+Vars).\nEquivalent to labeling([], Vars). See labeling/2.", "prefix":"label" }, "clpfd:labeling/2": { "body":"labeling(${1:Options}, ${2:Vars})$3\n$0", "description":"labeling(+Options, +Vars).\nAssign a value to each variable in Vars. Labeling means systematically trying out values for the finite domain variables Vars until all of them are ground. The domain of each variable in Vars must be finite. 
Options is a list of options that let you exhibit some control over the search process. Several categories of options exist: The variable selection strategy lets you specify which variable of Vars is labeled next and is one of: \n\nleftmost: Label the variables in the order they occur in Vars. This is the default.\n\nff: First fail. Label the leftmost variable with smallest domain next, in order to detect infeasibility early. This is often a good strategy.\n\nffc: Of the variables with smallest domains, the leftmost one participating in most constraints is labeled next.\n\nmin: Label the leftmost variable whose lower bound is the lowest next.\n\nmax: Label the leftmost variable whose upper bound is the highest next.\n\n The value order is one of: \n\nup: Try the elements of the chosen variable's domain in ascending order. This is the default.\n\ndown: Try the domain elements in descending order.\n\n The branching strategy is one of: \n\nstep: For each variable X, a choice is made between X = V and X #\\= V, where V is determined by the value ordering options. This is the default.\n\nenum: For each variable X, a choice is made between X = V_1, X = V_2 etc., for all values V_i of the domain of X. The order is determined by the value ordering options.\n\nbisect: For each variable X, a choice is made between X #=< M and X #> M, where M is the midpoint of the domain of X.\n\n At most one option of each category can be specified, and an option must not occur repeatedly. \n\nThe order of solutions can be influenced with: \n\n\n\nmin(Expr)\nmax(Expr)\n\n This generates solutions in ascending/descending order with respect to the evaluation of the arithmetic expression Expr. Labeling Vars must make Expr ground. If several such options are specified, they are interpreted from left to right, e.g.: \n\n\n\n?- [X,Y] ins 10..20, labeling([max(X),min(Y)],[X,Y]).\n\n This generates solutions in descending order of X, and for each binding of X, solutions are generated in ascending order of Y. To obtain the incomplete behaviour that other systems exhibit with \"maximize(Expr)\" and \"minimize(Expr)\", use once/1, e.g.: \n\n\n\nonce(labeling([max(Expr)], Vars))\n\n Labeling is always complete, always terminates, and yields no redundant solutions. See core relations and search (section A.8.9) for usage advice.\n\n", "prefix":"labeling" }, "clpfd:lex_chain/1": { "body":"lex_chain(${1:Lists})$2\n$0", "description":"lex_chain(+Lists).\nLists are lexicographically non-decreasing.", "prefix":"lex_chain" }, "clpfd:scalar_product/4": { "body":"scalar_product(${1:Cs}, ${2:Vs}, ${3:Rel}, ${4:Expr})$5\n$0", "description":"scalar_product(+Cs, +Vs, +Rel, ?Expr).\nTrue iff the scalar product of Cs and Vs is in relation Rel to Expr. Cs is a list of integers, Vs is a list of variables and integers. Rel is #=, #\\=, #<, #>, #=< or #>=.", "prefix":"scalar_product" }, "clpfd:serialized/2": { "body":"serialized(${1:Starts}, ${2:Durations})$3\n$0", "description":"serialized(+Starts, +Durations).\nDescribes a set of non-overlapping tasks. Starts = [S_1,...,S_n], is a list of variables or integers, Durations = [D_1,...,D_n] is a list of non-negative integers. Constrains Starts and Durations to denote a set of non-overlapping tasks, i.e.: S_i + D_i =< S_j or S_j + D_j =< S_i for all 1 =< i < j =< n. Example: \n\n?- length(Vs, 3),\n Vs ins 0..3,\n serialized(Vs, [1,2,3]),\n label(Vs).\nVs = [0, 1, 3] ;\nVs = [2, 0, 3] ;\nfalse.\n\n See also: Dorndorf et al. 
2000, \"Constraint Propagation Techniques for the Disjunctive Scheduling Problem\"\n\n ", "prefix":"serialized" }, "clpfd:sum/3": { "body":"sum(${1:Vars}, ${2:Rel}, ${3:Expr})$4\n$0", "description":"sum(+Vars, +Rel, ?Expr).\nThe sum of elements of the list Vars is in relation Rel to Expr. Rel is one of #=, #\\=, #<, #>, #=< or #>=. For example: \n\n?- [A,B,C] ins 0..sup, sum([A,B,C], #=, 100).\nA in 0..100,\nA+B+C#=100,\nB in 0..100,\nC in 0..100.\n\n ", "prefix":"sum" }, "clpfd:transpose/2": { "body": ["transpose(${1:Matrix}, ${2:Transpose})$3\n$0" ], "description":" transpose(+Matrix, ?Transpose)\n\n Transpose a list of lists of the same length. Example:\n\n ==\n ?- transpose([[1,2,3],[4,5,6],[7,8,9]], Ts).\n Ts = [[1, 4, 7], [2, 5, 8], [3, 6, 9]].\n ==\n\n This predicate is useful in many constraint programs. Consider for\n instance Sudoku:\n\n ==\n sudoku(Rows) :-\n length(Rows, 9), maplist(same_length(Rows), Rows),\n append(Rows, Vs), Vs ins 1..9,\n maplist(all_distinct, Rows),\n transpose(Rows, Columns),\n maplist(all_distinct, Columns),\n Rows = [As,Bs,Cs,Ds,Es,Fs,Gs,Hs,Is],\n blocks(As, Bs, Cs), blocks(Ds, Es, Fs), blocks(Gs, Hs, Is).\n\n blocks([], [], []).\n blocks([N1,N2,N3|Ns1], [N4,N5,N6|Ns2], [N7,N8,N9|Ns3]) :-\n all_distinct([N1,N2,N3,N4,N5,N6,N7,N8,N9]),\n blocks(Ns1, Ns2, Ns3).\n\n problem(1, [[_,_,_,_,_,_,_,_,_],\n [_,_,_,_,_,3,_,8,5],\n [_,_,1,_,2,_,_,_,_],\n [_,_,_,5,_,7,_,_,_],\n [_,_,4,_,_,_,1,_,_],\n [_,9,_,_,_,_,_,_,_],\n [5,_,_,_,_,_,_,7,3],\n [_,_,2,_,1,_,_,_,_],\n [_,_,_,_,4,_,_,_,9]]).\n ==\n\n Sample query:\n\n ==\n ?- problem(1, Rows), sudoku(Rows), maplist(writeln, Rows).\n [9,8,7,6,5,4,3,2,1]\n [2,4,6,1,7,3,9,8,5]\n [3,5,1,9,2,8,7,4,6]\n [1,2,8,5,3,7,6,9,4]\n [6,3,4,8,9,2,1,5,7]\n [7,9,5,4,6,1,8,3,2]\n [5,1,9,2,8,6,4,7,3]\n [4,7,2,3,1,9,5,6,8]\n [8,6,3,7,4,5,2,1,9]\n Rows = [[9, 8, 7, 6, 5, 4, 3, 2|...], ... , [...|...]].\n ==", "prefix":"transpose" }, "clpfd:tuples_in/2": { "body":"tuples_in(${1:Tuples}, ${2:Relation})$3\n$0", "description":"tuples_in(+Tuples, +Relation).\nTrue iff all Tuples are elements of Relation. Each element of the list Tuples is a list of integers or finite domain variables. Relation is a list of lists of integers. Arbitrary finite relations, such as compatibility tables, can be modeled in this way. For example, if 1 is compatible with 2 and 5, and 4 is compatible with 0 and 3: \n\n?- tuples_in([[X,Y]], [[1,2],[1,5],[4,0],[4,3]]), X = 4.\nX = 4,\nY in 0\\/3.\n\n As another example, consider a train schedule represented as a list of quadruples, denoting departure and arrival places and times for each train. In the following program, Ps is a feasible journey of length 3 from A to D via trains that are part of the given schedule. \n\n\n\ntrains([[1,2,0,1],\n [2,3,4,5],\n [2,3,0,1],\n [3,4,5,6],\n [3,4,2,3],\n [3,4,8,9]]).\n\nthreepath(A, D, Ps) :-\n Ps = [[A,B,_T0,T1],[B,C,T2,T3],[C,D,T4,_T5]],\n T2 #> T1,\n T4 #> T3,\n trains(Ts),\n tuples_in(Ps, Ts).\n\n In this example, the unique solution is found without labeling: \n\n\n\n?- threepath(1, 4, Ps).\nPs = [[1, 2, 0, 1], [2, 3, 4, 5], [3, 4, 8, 9]].\n\n ", "prefix":"tuples_in" }, "clpfd:zcompare/3": { "body":"zcompare(${1:Order}, ${2:A}, ${3:B})$4\n$0", "description":"zcompare(?Order, ?A, ?B).\nAnalogous to compare/3, with finite domain variables A and B. This predicate allows you to make several predicates over integers deterministic while preserving their generality and completeness. 
For example: \n\n\n\nn_factorial(N, F) :-\n zcompare(C, N, 0),\n n_factorial_(C, N, F).\n\nn_factorial_(=, _, 1).\nn_factorial_(>, N, F) :-\n F #= F0*N, N1 #= N - 1,\n n_factorial(N1, F0).\n\n This version is deterministic if the first argument is instantiated, because first argument indexing can distinguish the two different clauses: \n\n\n\n?- n_factorial(30, F).\nF = 265252859812191058636308480000000.\n\n The predicate can still be used in all directions, including the most general query: \n\n\n\n?- n_factorial(N, F).\nN = 0,\nF = 1 ;\nN = F, F = 1 ;\nN = F, F = 2 .\n\n \n\n", "prefix":"zcompare" }, "clpq/bb_q:bb_inf/3": { "body": ["bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"bb_inf('Param1','Param2','Param3')", "prefix":"bb_inf" }, "clpq/bb_q:bb_inf/4": { "body": [ "bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"bb_inf('Param1','Param2','Param3','Param4')", "prefix":"bb_inf" }, "clpq/bb_q:vertex_value/2": { "body": ["vertex_value(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"vertex_value('Param1','Param2')", "prefix":"vertex_value" }, "clpq/bv_q:allvars/2": { "body": ["allvars(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"allvars('Param1','Param2')", "prefix":"allvars" }, "clpq/bv_q:backsubst/3": { "body": ["backsubst(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"backsubst('Param1','Param2','Param3')", "prefix":"backsubst" }, "clpq/bv_q:backsubst_delta/4": { "body": [ "backsubst_delta(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"backsubst_delta('Param1','Param2','Param3','Param4')", "prefix":"backsubst_delta" }, "clpq/bv_q:basis_add/2": { "body": ["basis_add(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"basis_add('Param1','Param2')", "prefix":"basis_add" }, "clpq/bv_q:dec_step/2": { "body": ["dec_step(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"dec_step('Param1','Param2')", "prefix":"dec_step" }, "clpq/bv_q:deref/2": { "body": ["deref(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"deref('Param1','Param2')", "prefix":"deref" }, "clpq/bv_q:deref_var/2": { "body": ["deref_var(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"deref_var('Param1','Param2')", "prefix":"deref_var" }, "clpq/bv_q:detach_bounds/1": { "body": ["detach_bounds(${1:'Param1'})$2\n$0" ], "description":"detach_bounds('Param1')", "prefix":"detach_bounds" }, "clpq/bv_q:detach_bounds_vlv/5": { "body": [ "detach_bounds_vlv(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"detach_bounds_vlv('Param1','Param2','Param3','Param4','Param5')", "prefix":"detach_bounds_vlv" }, "clpq/bv_q:determine_active_dec/1": { "body": ["determine_active_dec(${1:'Param1'})$2\n$0" ], "description":"determine_active_dec('Param1')", "prefix":"determine_active_dec" }, "clpq/bv_q:determine_active_inc/1": { "body": ["determine_active_inc(${1:'Param1'})$2\n$0" ], "description":"determine_active_inc('Param1')", "prefix":"determine_active_inc" }, "clpq/bv_q:dump_nz/5": { "body": [ "dump_nz(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"dump_nz('Param1','Param2','Param3','Param4','Param5')", "prefix":"dump_nz" }, "clpq/bv_q:dump_var/6": { "body": [ "dump_var(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'})$7\n$0" ], "description":"dump_var('Param1','Param2','Param3','Param4','Param5','Param6')", "prefix":"dump_var" }, 
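The zcompare/3 entry above pairs it with n_factorial/2. As a further illustration, here is a minimal sketch (not taken from the manual; the names n_list/2 and n_list_/3 are invented) that applies the same pattern to relate a length N to a list of that length, so that first-argument indexing keeps the ground case deterministic:

:- use_module(library(clpfd)).

n_list(N, Ls) :-
    zcompare(C, N, 0),
    n_list_(C, N, Ls).

n_list_(=, _, []).
n_list_(>, N, [_|Ls]) :-
    N1 #= N - 1,
    n_list(N1, Ls).

A call such as n_list(2, Ls) then succeeds deterministically, while the most general query n_list(N, Ls) still enumerates answers on backtracking.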
"clpq/bv_q:export_binding/1": { "body": ["export_binding(${1:'Param1'})$2\n$0" ], "description":"export_binding('Param1')", "prefix":"export_binding" }, "clpq/bv_q:get_or_add_class/2": { "body": ["get_or_add_class(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"get_or_add_class('Param1','Param2')", "prefix":"get_or_add_class" }, "clpq/bv_q:inc_step/2": { "body": ["inc_step(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inc_step('Param1','Param2')", "prefix":"inc_step" }, "clpq/bv_q:inf/2": { "body": ["inf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inf('Param1','Param2')", "prefix":"inf" }, "clpq/bv_q:inf/4": { "body": [ "inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"inf('Param1','Param2','Param3','Param4')", "prefix":"inf" }, "clpq/bv_q:intro_at/3": { "body": ["intro_at(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"intro_at('Param1','Param2','Param3')", "prefix":"intro_at" }, "clpq/bv_q:iterate_dec/2": { "body": ["iterate_dec(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"iterate_dec('Param1','Param2')", "prefix":"iterate_dec" }, "clpq/bv_q:lb/3": { "body": ["lb(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lb('Param1','Param2','Param3')", "prefix":"lb" }, "clpq/bv_q:log_deref/4": { "body": [ "log_deref(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"log_deref('Param1','Param2','Param3','Param4')", "prefix":"log_deref" }, "clpq/bv_q:maximize/1": { "body": ["maximize(${1:'Param1'})$2\n$0" ], "description":"maximize('Param1')", "prefix":"maximize" }, "clpq/bv_q:minimize/1": { "body": ["minimize(${1:'Param1'})$2\n$0" ], "description":"minimize('Param1')", "prefix":"minimize" }, "clpq/bv_q:pivot/5": { "body": [ "pivot(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"pivot('Param1','Param2','Param3','Param4','Param5')", "prefix":"pivot" }, "clpq/bv_q:pivot_a/4": { "body": [ "pivot_a(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"pivot_a('Param1','Param2','Param3','Param4')", "prefix":"pivot_a" }, "clpq/bv_q:rcbl_status/6": { "body": [ "rcbl_status(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'})$7\n$0" ], "description":"rcbl_status('Param1','Param2','Param3','Param4','Param5','Param6')", "prefix":"rcbl_status" }, "clpq/bv_q:reconsider/1": { "body": ["reconsider(${1:'Param1'})$2\n$0" ], "description":"reconsider('Param1')", "prefix":"reconsider" }, "clpq/bv_q:same_class/2": { "body": ["same_class(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"same_class('Param1','Param2')", "prefix":"same_class" }, "clpq/bv_q:solve/1": { "body": ["solve(${1:'Param1'})$2\n$0" ], "description":"solve('Param1')", "prefix":"solve" }, "clpq/bv_q:solve_ord_x/3": { "body": ["solve_ord_x(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"solve_ord_x('Param1','Param2','Param3')", "prefix":"solve_ord_x" }, "clpq/bv_q:sup/2": { "body": ["sup(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"sup('Param1','Param2')", "prefix":"sup" }, "clpq/bv_q:sup/4": { "body": [ "sup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"sup('Param1','Param2','Param3','Param4')", "prefix":"sup" }, "clpq/bv_q:ub/3": { "body": ["ub(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"ub('Param1','Param2','Param3')", "prefix":"ub" }, "clpq/bv_q:unconstrained/4": { "body": [ 
"unconstrained(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"unconstrained('Param1','Param2','Param3','Param4')", "prefix":"unconstrained" }, "clpq/bv_q:var_intern/2": { "body": ["var_intern(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"var_intern('Param1','Param2')", "prefix":"var_intern" }, "clpq/bv_q:var_intern/3": { "body": ["var_intern(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"var_intern('Param1','Param2','Param3')", "prefix":"var_intern" }, "clpq/bv_q:var_with_def_assign/2": { "body": ["var_with_def_assign(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"var_with_def_assign('Param1','Param2')", "prefix":"var_with_def_assign" }, "clpq/bv_q:var_with_def_intern/4": { "body": [ "var_with_def_intern(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"var_with_def_intern('Param1','Param2','Param3','Param4')", "prefix":"var_with_def_intern" }, "clpq/fourmotz_q:fm_elim/3": { "body": ["fm_elim(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"fm_elim('Param1','Param2','Param3')", "prefix":"fm_elim" }, "clpq/ineq_q:ineq/4": { "body": [ "ineq(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"ineq('Param1','Param2','Param3','Param4')", "prefix":"ineq" }, "clpq/ineq_q:ineq_one/4": { "body": [ "ineq_one(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"ineq_one('Param1','Param2','Param3','Param4')", "prefix":"ineq_one" }, "clpq/ineq_q:ineq_one_n_n_0/1": { "body": ["ineq_one_n_n_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_n_n_0('Param1')", "prefix":"ineq_one_n_n_0" }, "clpq/ineq_q:ineq_one_n_p_0/1": { "body": ["ineq_one_n_p_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_n_p_0('Param1')", "prefix":"ineq_one_n_p_0" }, "clpq/ineq_q:ineq_one_s_n_0/1": { "body": ["ineq_one_s_n_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_s_n_0('Param1')", "prefix":"ineq_one_s_n_0" }, "clpq/ineq_q:ineq_one_s_p_0/1": { "body": ["ineq_one_s_p_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_s_p_0('Param1')", "prefix":"ineq_one_s_p_0" }, "clpq/itf_q:do_checks/8": { "body": [ "do_checks(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'}, ${8:'Param8'})$9\n$0" ], "description":"do_checks('Param1','Param2','Param3','Param4','Param5','Param6','Param7','Param8')", "prefix":"do_checks" }, "clpq/nf_q:entailed/1": { "body": ["entailed(${1:'Param1'})$2\n$0" ], "description":"entailed('Param1')", "prefix":"entailed" }, "clpq/nf_q:nf/2": { "body": ["nf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"nf('Param1','Param2')", "prefix":"nf" }, "clpq/nf_q:nf2term/2": { "body": ["nf2term(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"nf2term('Param1','Param2')", "prefix":"nf2term" }, "clpq/nf_q:nf_constant/2": { "body": ["nf_constant(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"nf_constant('Param1','Param2')", "prefix":"nf_constant" }, "clpq/nf_q:repair/2": { "body": ["repair(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"repair('Param1','Param2')", "prefix":"repair" }, "clpq/nf_q:split/3": { "body": ["split(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"split('Param1','Param2','Param3')", "prefix":"split" }, "clpq/nf_q:wait_linear/3": { "body": ["wait_linear(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"wait_linear('Param1','Param2','Param3')", "prefix":"wait_linear" }, 
"clpq/store_q:add_linear_11/3": { "body": ["add_linear_11(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"add_linear_11('Param1','Param2','Param3')", "prefix":"add_linear_11" }, "clpq/store_q:add_linear_f1/4": { "body": [ "add_linear_f1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"add_linear_f1('Param1','Param2','Param3','Param4')", "prefix":"add_linear_f1" }, "clpq/store_q:add_linear_ff/5": { "body": [ "add_linear_ff(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"add_linear_ff('Param1','Param2','Param3','Param4','Param5')", "prefix":"add_linear_ff" }, "clpq/store_q:delete_factor/4": { "body": [ "delete_factor(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"delete_factor('Param1','Param2','Param3','Param4')", "prefix":"delete_factor" }, "clpq/store_q:indep/2": { "body": ["indep(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"indep('Param1','Param2')", "prefix":"indep" }, "clpq/store_q:isolate/3": { "body": ["isolate(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"isolate('Param1','Param2','Param3')", "prefix":"isolate" }, "clpq/store_q:mult_hom/3": { "body": ["mult_hom(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"mult_hom('Param1','Param2','Param3')", "prefix":"mult_hom" }, "clpq/store_q:mult_linear_factor/3": { "body": [ "mult_linear_factor(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"mult_linear_factor('Param1','Param2','Param3')", "prefix":"mult_linear_factor" }, "clpq/store_q:nf2sum/3": { "body": ["nf2sum(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nf2sum('Param1','Param2','Param3')", "prefix":"nf2sum" }, "clpq/store_q:nf_coeff_of/3": { "body": ["nf_coeff_of(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nf_coeff_of('Param1','Param2','Param3')", "prefix":"nf_coeff_of" }, "clpq/store_q:nf_rhs_x/4": { "body": [ "nf_rhs_x(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"nf_rhs_x('Param1','Param2','Param3','Param4')", "prefix":"nf_rhs_x" }, "clpq/store_q:nf_substitute/4": { "body": [ "nf_substitute(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"nf_substitute('Param1','Param2','Param3','Param4')", "prefix":"nf_substitute" }, "clpq/store_q:normalize_scalar/2": { "body": ["normalize_scalar(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"normalize_scalar('Param1','Param2')", "prefix":"normalize_scalar" }, "clpq/store_q:renormalize/2": { "body": ["renormalize(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"renormalize('Param1','Param2')", "prefix":"renormalize" }, "clpq:bb_inf/3": { "body": ["bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"bb_inf('Param1','Param2','Param3')", "prefix":"bb_inf" }, "clpq:bb_inf/4": { "body": [ "bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"bb_inf('Param1','Param2','Param3','Param4')", "prefix":"bb_inf" }, "clpq:clp_type/2": { "body": ["clp_type(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"clp_type('Param1','Param2')", "prefix":"clp_type" }, "clpq:dump/3": { "body": ["dump(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"dump('Param1','Param2','Param3')", "prefix":"dump" }, "clpq:entailed/1": { "body": ["entailed(${1:'Param1'})$2\n$0" ], "description":"entailed('Param1')", "prefix":"entailed" }, 
"clpq:inf/2": { "body": ["inf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inf('Param1','Param2')", "prefix":"inf" }, "clpq:inf/4": { "body": [ "inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"inf('Param1','Param2','Param3','Param4')", "prefix":"inf" }, "clpq:maximize/1": { "body": ["maximize(${1:'Param1'})$2\n$0" ], "description":"maximize('Param1')", "prefix":"maximize" }, "clpq:minimize/1": { "body": ["minimize(${1:'Param1'})$2\n$0" ], "description":"minimize('Param1')", "prefix":"minimize" }, "clpq:ordering/1": { "body": ["ordering(${1:'Param1'})$2\n$0" ], "description":"ordering('Param1')", "prefix":"ordering" }, "clpq:sup/2": { "body": ["sup(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"sup('Param1','Param2')", "prefix":"sup" }, "clpq:sup/4": { "body": [ "sup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"sup('Param1','Param2','Param3','Param4')", "prefix":"sup" }, "clpqr/class:arrangement/2": { "body": ["arrangement(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"arrangement('Param1','Param2')", "prefix":"arrangement" }, "clpqr/class:class_allvars/2": { "body": ["class_allvars(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_allvars('Param1','Param2')", "prefix":"class_allvars" }, "clpqr/class:class_basis/2": { "body": ["class_basis(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_basis('Param1','Param2')", "prefix":"class_basis" }, "clpqr/class:class_basis_add/3": { "body": [ "class_basis_add(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"class_basis_add('Param1','Param2','Param3')", "prefix":"class_basis_add" }, "clpqr/class:class_basis_drop/2": { "body": ["class_basis_drop(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_basis_drop('Param1','Param2')", "prefix":"class_basis_drop" }, "clpqr/class:class_basis_pivot/3": { "body": [ "class_basis_pivot(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"class_basis_pivot('Param1','Param2','Param3')", "prefix":"class_basis_pivot" }, "clpqr/class:class_drop/2": { "body": ["class_drop(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_drop('Param1','Param2')", "prefix":"class_drop" }, "clpqr/class:class_get_clp/2": { "body": ["class_get_clp(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_get_clp('Param1','Param2')", "prefix":"class_get_clp" }, "clpqr/class:class_get_prio/2": { "body": ["class_get_prio(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_get_prio('Param1','Param2')", "prefix":"class_get_prio" }, "clpqr/class:class_new/5": { "body": [ "class_new(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"class_new('Param1','Param2','Param3','Param4','Param5')", "prefix":"class_new" }, "clpqr/class:class_put_prio/2": { "body": ["class_put_prio(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"class_put_prio('Param1','Param2')", "prefix":"class_put_prio" }, "clpqr/class:ordering/1": { "body": ["ordering(${1:'Param1'})$2\n$0" ], "description":"ordering('Param1')", "prefix":"ordering" }, "clpqr/dump:dump/3": { "body": ["dump(${1:Target}, ${2:NewVars}, ${3:Constraints})$4\n$0" ], "description":" dump(+Target,-NewVars,-Constraints) is det.\n\n Returns in , the constraints that currently hold on Target where\n all variables in are copied to new variables in and the\n constraints are given on these new variables. 
In short, you can safely\n manipulate NewVars and Constraints without changing the constraints on\n Target.", "prefix":"dump" }, "clpqr/dump:projecting_assert/1": { "body": ["projecting_assert(${1:'Param1'})$2\n$0" ], "description":"projecting_assert('Param1')", "prefix":"projecting_assert" }, "clpqr/geler:collect_nonlin/3": { "body": ["collect_nonlin(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"collect_nonlin('Param1','Param2','Param3')", "prefix":"collect_nonlin" }, "clpqr/geler:geler/3": { "body": ["geler(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"geler('Param1','Param2','Param3')", "prefix":"geler" }, "clpqr/geler:project_nonlin/3": { "body": ["project_nonlin(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"project_nonlin('Param1','Param2','Param3')", "prefix":"project_nonlin" }, "clpqr/itf:clp_type/2": { "body": ["clp_type(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"clp_type('Param1','Param2')", "prefix":"clp_type" }, "clpqr/itf:dump_linear/3": { "body": ["dump_linear(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"dump_linear('Param1','Param2','Param3')", "prefix":"dump_linear" }, "clpqr/itf:dump_nonzero/3": { "body": ["dump_nonzero(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"dump_nonzero('Param1','Param2','Param3')", "prefix":"dump_nonzero" }, "clpqr/ordering:arrangement/2": { "body": ["arrangement(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"arrangement('Param1','Param2')", "prefix":"arrangement" }, "clpqr/ordering:combine/3": { "body": ["combine(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"combine('Param1','Param2','Param3')", "prefix":"combine" }, "clpqr/ordering:ordering/1": { "body": ["ordering(${1:'Param1'})$2\n$0" ], "description":"ordering('Param1')", "prefix":"ordering" }, "clpqr/project:drop_dep/1": { "body": ["drop_dep(${1:'Param1'})$2\n$0" ], "description":"drop_dep('Param1')", "prefix":"drop_dep" }, "clpqr/project:drop_dep_one/1": { "body": ["drop_dep_one(${1:'Param1'})$2\n$0" ], "description":"drop_dep_one('Param1')", "prefix":"drop_dep_one" }, "clpqr/project:make_target_indep/2": { "body": ["make_target_indep(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"make_target_indep('Param1','Param2')", "prefix":"make_target_indep" }, "clpqr/project:project_attributes/2": { "body": ["project_attributes(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"project_attributes('Param1','Param2')", "prefix":"project_attributes" }, "clpqr/redund:redundancy_vars/1": { "body": ["redundancy_vars(${1:'Param1'})$2\n$0" ], "description":"redundancy_vars('Param1')", "prefix":"redundancy_vars" }, "clpqr/redund:systems/3": { "body": ["systems(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"systems('Param1','Param2','Param3')", "prefix":"systems" }, "clpqr:bb_inf/3": { "body":"bb_inf(${1:Ints}, ${2:Expression}, ${3:Inf})$4\n$0", "description":"bb_inf(+Ints, +Expression, -Inf).\nThe same as bb_inf/5 or bb_inf/4 but without returning the values of the integers. In CLP(R), an error margin of 0.001 is used.", "prefix":"bb_inf" }, "clpqr:bb_inf/4": { "body":"bb_inf(${1:Ints}, ${2:Expression}, ${3:Inf}, ${4:Vertex})$5\n$0", "description":"bb_inf(+Ints, +Expression, -Inf, -Vertex).\nThis predicate is offered in CLP(Q) only. 
It behaves the same as bb_inf/5 but does not use an error margin.", "prefix":"bb_inf" }, "clpqr:bb_inf/5": { "body":"bb_inf(${1:Ints}, ${2:Expression}, ${3:Inf}, ${4:Vertex}, ${5:Eps})$6\n$0", "description":"bb_inf(+Ints, +Expression, -Inf, -Vertex, +Eps).\nThis predicate is offered in CLP(R) only. It computes the infimum of Expression within the current constraint store, with the additional constraint that in that infimum, all variables in Ints have integral values. Vertex will contain the values of Ints in the infimum. Eps denotes how much a value may differ from an integer to be considered an integer. E.g. when Eps = 0.001, then X = 4.999 will be considered as an integer (5 in this case). Eps should be between 0 and 0.5.", "prefix":"bb_inf" }, "clpqr:dump/3": { "body":"dump(${1:Target}, ${2:Newvars}, ${3:CodedAnswer})$4\n$0", "description":"dump(+Target, +Newvars, -CodedAnswer).\nReturns the constraints on Target in the list CodedAnswer where all variables of Target have been replaced by NewVars. This operation does not change the constraint store. E.g. in \n\ndump([X,Y,Z],[x,y,z],Cons)\n\n Cons will contain the constraints on X, Y and Z, where these variables have been replaced by atoms x, y and z. \n\n\n\n", "prefix":"dump" }, "clpqr:entailed/1": { "body":"entailed(${1:Constraint})$2\n$0", "description":"entailed(+Constraint).\nSucceeds if Constraint is necessarily true within the current constraint store. This means that adding the negation of the constraint to the store results in failure.", "prefix":"entailed" }, "clpqr:inf/2": { "body":"inf(${1:Expression}, ${2:Inf})$3\n$0", "description":"inf(+Expression, -Inf).\nComputes the infimum of Expression within the current state of the constraint store and returns that infimum in Inf. This predicate does not change the constraint store.", "prefix":"inf" }, "clpqr:maximize/1": { "body":"maximize(${1:Expression})$2\n$0", "description":"maximize(+Expression).\nMaximizes Expression within the current constraint store. This is the same as computing the supremum and equating the expression to that supremum.", "prefix":"maximize" }, "clpqr:minimize/1": { "body":"minimize(${1:Expression})$2\n$0", "description":"minimize(+Expression).\nMinimizes Expression within the current constraint store. This is the same as computing the infimum and equating the expression to that infimum.", "prefix":"minimize" }, "clpqr:sup/2": { "body":"sup(${1:Expression}, ${2:Sup})$3\n$0", "description":"sup(+Expression, -Sup).\nComputes the supremum of Expression within the current state of the constraint store and returns that supremum in Sup. 
This predicate does not change the constraint store.", "prefix":"sup" }, "clpr/bb_r:bb_inf/3": { "body": ["bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"bb_inf('Param1','Param2','Param3')", "prefix":"bb_inf" }, "clpr/bb_r:bb_inf/5": { "body": [ "bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"bb_inf('Param1','Param2','Param3','Param4','Param5')", "prefix":"bb_inf" }, "clpr/bb_r:vertex_value/2": { "body": ["vertex_value(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"vertex_value('Param1','Param2')", "prefix":"vertex_value" }, "clpr/bv_r:allvars/2": { "body": ["allvars(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"allvars('Param1','Param2')", "prefix":"allvars" }, "clpr/bv_r:backsubst/3": { "body": ["backsubst(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"backsubst('Param1','Param2','Param3')", "prefix":"backsubst" }, "clpr/bv_r:backsubst_delta/4": { "body": [ "backsubst_delta(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"backsubst_delta('Param1','Param2','Param3','Param4')", "prefix":"backsubst_delta" }, "clpr/bv_r:basis_add/2": { "body": ["basis_add(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"basis_add('Param1','Param2')", "prefix":"basis_add" }, "clpr/bv_r:dec_step/2": { "body": ["dec_step(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"dec_step('Param1','Param2')", "prefix":"dec_step" }, "clpr/bv_r:deref/2": { "body": ["deref(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"deref('Param1','Param2')", "prefix":"deref" }, "clpr/bv_r:deref_var/2": { "body": ["deref_var(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"deref_var('Param1','Param2')", "prefix":"deref_var" }, "clpr/bv_r:detach_bounds/1": { "body": ["detach_bounds(${1:'Param1'})$2\n$0" ], "description":"detach_bounds('Param1')", "prefix":"detach_bounds" }, "clpr/bv_r:detach_bounds_vlv/5": { "body": [ "detach_bounds_vlv(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"detach_bounds_vlv('Param1','Param2','Param3','Param4','Param5')", "prefix":"detach_bounds_vlv" }, "clpr/bv_r:determine_active_dec/1": { "body": ["determine_active_dec(${1:'Param1'})$2\n$0" ], "description":"determine_active_dec('Param1')", "prefix":"determine_active_dec" }, "clpr/bv_r:determine_active_inc/1": { "body": ["determine_active_inc(${1:'Param1'})$2\n$0" ], "description":"determine_active_inc('Param1')", "prefix":"determine_active_inc" }, "clpr/bv_r:dump_nz/5": { "body": [ "dump_nz(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"dump_nz('Param1','Param2','Param3','Param4','Param5')", "prefix":"dump_nz" }, "clpr/bv_r:dump_var/6": { "body": [ "dump_var(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'})$7\n$0" ], "description":"dump_var('Param1','Param2','Param3','Param4','Param5','Param6')", "prefix":"dump_var" }, "clpr/bv_r:export_binding/1": { "body": ["export_binding(${1:'Param1'})$2\n$0" ], "description":"export_binding('Param1')", "prefix":"export_binding" }, "clpr/bv_r:export_binding/2": { "body": ["export_binding(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"export_binding('Param1','Param2')", "prefix":"export_binding" }, "clpr/bv_r:get_or_add_class/2": { "body": ["get_or_add_class(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"get_or_add_class('Param1','Param2')", "prefix":"get_or_add_class" }, 
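To complement the clpqr interface predicates documented above (bb_inf/3, inf/2, sup/2, entailed/1, minimize/1, maximize/1), here is a small illustrative sketch using CLP(Q); the queries and numbers are ours rather than taken from the manual, and the toplevel will additionally print residual constraints:

:- use_module(library(clpq)).

% sup/2: the supremum of 30*X + 50*Y under these constraints is 310,
% attained at X = 7, Y = 2.
?- { 2*X + Y =< 16, X + 2*Y =< 11, X >= 0, Y >= 0 },
   sup(30*X + 50*Y, Sup).

% bb_inf/3: infimum of X + Y with X restricted to integral values; Inf = 1.
?- { X >= 1/2, Y >= 0 }, bb_inf([X], X + Y, Inf).

% entailed/1: X >= 2 implies X > 1, so this goal succeeds.
?- { X >= 2 }, entailed(X > 1).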
"clpr/bv_r:inc_step/2": { "body": ["inc_step(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inc_step('Param1','Param2')", "prefix":"inc_step" }, "clpr/bv_r:inf/2": { "body": ["inf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inf('Param1','Param2')", "prefix":"inf" }, "clpr/bv_r:inf/4": { "body": [ "inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"inf('Param1','Param2','Param3','Param4')", "prefix":"inf" }, "clpr/bv_r:intro_at/3": { "body": ["intro_at(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"intro_at('Param1','Param2','Param3')", "prefix":"intro_at" }, "clpr/bv_r:iterate_dec/2": { "body": ["iterate_dec(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"iterate_dec('Param1','Param2')", "prefix":"iterate_dec" }, "clpr/bv_r:lb/3": { "body": ["lb(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"lb('Param1','Param2','Param3')", "prefix":"lb" }, "clpr/bv_r:log_deref/4": { "body": [ "log_deref(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"log_deref('Param1','Param2','Param3','Param4')", "prefix":"log_deref" }, "clpr/bv_r:maximize/1": { "body": ["maximize(${1:'Param1'})$2\n$0" ], "description":"maximize('Param1')", "prefix":"maximize" }, "clpr/bv_r:minimize/1": { "body": ["minimize(${1:'Param1'})$2\n$0" ], "description":"minimize('Param1')", "prefix":"minimize" }, "clpr/bv_r:pivot/5": { "body": [ "pivot(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"pivot('Param1','Param2','Param3','Param4','Param5')", "prefix":"pivot" }, "clpr/bv_r:pivot_a/4": { "body": [ "pivot_a(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"pivot_a('Param1','Param2','Param3','Param4')", "prefix":"pivot_a" }, "clpr/bv_r:rcbl_status/6": { "body": [ "rcbl_status(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'})$7\n$0" ], "description":"rcbl_status('Param1','Param2','Param3','Param4','Param5','Param6')", "prefix":"rcbl_status" }, "clpr/bv_r:reconsider/1": { "body": ["reconsider(${1:'Param1'})$2\n$0" ], "description":"reconsider('Param1')", "prefix":"reconsider" }, "clpr/bv_r:same_class/2": { "body": ["same_class(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"same_class('Param1','Param2')", "prefix":"same_class" }, "clpr/bv_r:solve/1": { "body": ["solve(${1:'Param1'})$2\n$0" ], "description":"solve('Param1')", "prefix":"solve" }, "clpr/bv_r:solve_ord_x/3": { "body": ["solve_ord_x(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"solve_ord_x('Param1','Param2','Param3')", "prefix":"solve_ord_x" }, "clpr/bv_r:sup/2": { "body": ["sup(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"sup('Param1','Param2')", "prefix":"sup" }, "clpr/bv_r:sup/4": { "body": [ "sup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"sup('Param1','Param2','Param3','Param4')", "prefix":"sup" }, "clpr/bv_r:ub/3": { "body": ["ub(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"ub('Param1','Param2','Param3')", "prefix":"ub" }, "clpr/bv_r:unconstrained/4": { "body": [ "unconstrained(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"unconstrained('Param1','Param2','Param3','Param4')", "prefix":"unconstrained" }, "clpr/bv_r:var_intern/2": { "body": ["var_intern(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"var_intern('Param1','Param2')", "prefix":"var_intern" 
}, "clpr/bv_r:var_intern/3": { "body": ["var_intern(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"var_intern('Param1','Param2','Param3')", "prefix":"var_intern" }, "clpr/bv_r:var_with_def_assign/2": { "body": ["var_with_def_assign(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"var_with_def_assign('Param1','Param2')", "prefix":"var_with_def_assign" }, "clpr/bv_r:var_with_def_intern/4": { "body": [ "var_with_def_intern(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"var_with_def_intern('Param1','Param2','Param3','Param4')", "prefix":"var_with_def_intern" }, "clpr/fourmotz_r:fm_elim/3": { "body": ["fm_elim(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"fm_elim('Param1','Param2','Param3')", "prefix":"fm_elim" }, "clpr/ineq_r:ineq/4": { "body": [ "ineq(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"ineq('Param1','Param2','Param3','Param4')", "prefix":"ineq" }, "clpr/ineq_r:ineq_one/4": { "body": [ "ineq_one(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"ineq_one('Param1','Param2','Param3','Param4')", "prefix":"ineq_one" }, "clpr/ineq_r:ineq_one_n_n_0/1": { "body": ["ineq_one_n_n_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_n_n_0('Param1')", "prefix":"ineq_one_n_n_0" }, "clpr/ineq_r:ineq_one_n_p_0/1": { "body": ["ineq_one_n_p_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_n_p_0('Param1')", "prefix":"ineq_one_n_p_0" }, "clpr/ineq_r:ineq_one_s_n_0/1": { "body": ["ineq_one_s_n_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_s_n_0('Param1')", "prefix":"ineq_one_s_n_0" }, "clpr/ineq_r:ineq_one_s_p_0/1": { "body": ["ineq_one_s_p_0(${1:'Param1'})$2\n$0" ], "description":"ineq_one_s_p_0('Param1')", "prefix":"ineq_one_s_p_0" }, "clpr/itf_r:do_checks/8": { "body": [ "do_checks(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'}, ${8:'Param8'})$9\n$0" ], "description":"do_checks('Param1','Param2','Param3','Param4','Param5','Param6','Param7','Param8')", "prefix":"do_checks" }, "clpr/nf_r:entailed/1": { "body": ["entailed(${1:'Param1'})$2\n$0" ], "description":"entailed('Param1')", "prefix":"entailed" }, "clpr/nf_r:nf/2": { "body": ["nf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"nf('Param1','Param2')", "prefix":"nf" }, "clpr/nf_r:nf2term/2": { "body": ["nf2term(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"nf2term('Param1','Param2')", "prefix":"nf2term" }, "clpr/nf_r:nf_constant/2": { "body": ["nf_constant(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"nf_constant('Param1','Param2')", "prefix":"nf_constant" }, "clpr/nf_r:repair/2": { "body": ["repair(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"repair('Param1','Param2')", "prefix":"repair" }, "clpr/nf_r:split/3": { "body": ["split(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"split('Param1','Param2','Param3')", "prefix":"split" }, "clpr/nf_r:wait_linear/3": { "body": ["wait_linear(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"wait_linear('Param1','Param2','Param3')", "prefix":"wait_linear" }, "clpr/store_r:add_linear_11/3": { "body": ["add_linear_11(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"add_linear_11('Param1','Param2','Param3')", "prefix":"add_linear_11" }, "clpr/store_r:add_linear_f1/4": { "body": [ "add_linear_f1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], 
"description":"add_linear_f1('Param1','Param2','Param3','Param4')", "prefix":"add_linear_f1" }, "clpr/store_r:add_linear_ff/5": { "body": [ "add_linear_ff(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"add_linear_ff('Param1','Param2','Param3','Param4','Param5')", "prefix":"add_linear_ff" }, "clpr/store_r:delete_factor/4": { "body": [ "delete_factor(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"delete_factor('Param1','Param2','Param3','Param4')", "prefix":"delete_factor" }, "clpr/store_r:indep/2": { "body": ["indep(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"indep('Param1','Param2')", "prefix":"indep" }, "clpr/store_r:isolate/3": { "body": ["isolate(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"isolate('Param1','Param2','Param3')", "prefix":"isolate" }, "clpr/store_r:mult_hom/3": { "body": ["mult_hom(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"mult_hom('Param1','Param2','Param3')", "prefix":"mult_hom" }, "clpr/store_r:mult_linear_factor/3": { "body": [ "mult_linear_factor(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"mult_linear_factor('Param1','Param2','Param3')", "prefix":"mult_linear_factor" }, "clpr/store_r:nf2sum/3": { "body": ["nf2sum(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nf2sum('Param1','Param2','Param3')", "prefix":"nf2sum" }, "clpr/store_r:nf_coeff_of/3": { "body": ["nf_coeff_of(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nf_coeff_of('Param1','Param2','Param3')", "prefix":"nf_coeff_of" }, "clpr/store_r:nf_rhs_x/4": { "body": [ "nf_rhs_x(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"nf_rhs_x('Param1','Param2','Param3','Param4')", "prefix":"nf_rhs_x" }, "clpr/store_r:nf_substitute/4": { "body": [ "nf_substitute(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"nf_substitute('Param1','Param2','Param3','Param4')", "prefix":"nf_substitute" }, "clpr/store_r:normalize_scalar/2": { "body": ["normalize_scalar(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"normalize_scalar('Param1','Param2')", "prefix":"normalize_scalar" }, "clpr/store_r:renormalize/2": { "body": ["renormalize(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"renormalize('Param1','Param2')", "prefix":"renormalize" }, "clpr:bb_inf/3": { "body": ["bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"bb_inf('Param1','Param2','Param3')", "prefix":"bb_inf" }, "clpr:bb_inf/5": { "body": [ "bb_inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"bb_inf('Param1','Param2','Param3','Param4','Param5')", "prefix":"bb_inf" }, "clpr:clp_type/2": { "body": ["clp_type(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"clp_type('Param1','Param2')", "prefix":"clp_type" }, "clpr:dump/3": { "body": ["dump(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"dump('Param1','Param2','Param3')", "prefix":"dump" }, "clpr:entailed/1": { "body": ["entailed(${1:'Param1'})$2\n$0" ], "description":"entailed('Param1')", "prefix":"entailed" }, "clpr:inf/2": { "body": ["inf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inf('Param1','Param2')", "prefix":"inf" }, "clpr:inf/4": { "body": [ "inf(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"inf('Param1','Param2','Param3','Param4')", "prefix":"inf" }, 
"clpr:maximize/1": { "body": ["maximize(${1:'Param1'})$2\n$0" ], "description":"maximize('Param1')", "prefix":"maximize" }, "clpr:minimize/1": { "body": ["minimize(${1:'Param1'})$2\n$0" ], "description":"minimize('Param1')", "prefix":"minimize" }, "clpr:ordering/1": { "body": ["ordering(${1:'Param1'})$2\n$0" ], "description":"ordering('Param1')", "prefix":"ordering" }, "clpr:sup/2": { "body": ["sup(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"sup('Param1','Param2')", "prefix":"sup" }, "clpr:sup/4": { "body": [ "sup(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"sup('Param1','Param2','Param3','Param4')", "prefix":"sup" }, "code_type/2": { "body":"code_type(${1:Code}, ${2:Type})$3\n$0", "description":"code_type(?Code, ?Type).\nAs char_type/2, but uses character codes rather than one-character atoms. Please note that both predicates are as flexible as possible. They handle either representation if the argument is instantiated and will instantiate only with an integer code or a one-character atom, depending of the version used. See also the Prolog flag double_quotes, atom_chars/2 and atom_codes/2.", "prefix":"code_type" }, "codesio:format_to_codes/3": { "body": ["format_to_codes(${1:Format}, ${2:Args}, ${3:Codes})$4\n$0" ], "description":" format_to_codes(+Format, +Args, -Codes) is det.\n\n Use format/2 to write to a list of character codes.", "prefix":"format_to_codes" }, "codesio:format_to_codes/4": { "body": [ "format_to_codes(${1:Format}, ${2:Args}, ${3:Codes}, ${4:Tail})$5\n$0" ], "description":" format_to_codes(+Format, +Args, -Codes, ?Tail) is det.\n\n Use format/2 to write to a difference list of character codes.", "prefix":"format_to_codes" }, "codesio:open_codes_stream/2": { "body": ["open_codes_stream(${1:Codes}, ${2:Stream})$3\n$0" ], "description":" open_codes_stream(+Codes, -Stream) is det.\n\n Open Codes as an input stream.\n\n @see open_string/2.", "prefix":"open_codes_stream" }, "codesio:read_from_codes/2": { "body": ["read_from_codes(${1:Codes}, ${2:Term})$3\n$0" ], "description":" read_from_codes(+Codes, -Term) is det.\n\n Read Codes into Term.\n\n @compat The SWI-Prolog version does not require Codes to end\n in a full-stop.", "prefix":"read_from_codes" }, "codesio:read_term_from_codes/3": { "body": ["read_term_from_codes(${1:Codes}, ${2:Term}, ${3:Options})$4\n$0" ], "description":" read_term_from_codes(+Codes, -Term, +Options) is det.\n\n Read Codes into Term. Options are processed by read_term/3.\n\n @compat sicstus", "prefix":"read_term_from_codes" }, "codesio:with_output_to_codes/2": { "body": ["with_output_to_codes(${1:Goal}, ${2:Codes})$3\n$0" ], "description":" with_output_to_codes(:Goal, Codes) is det.\n\n Run Goal with as once/1. Output written to =current_output=\n is collected in Codes.", "prefix":"with_output_to_codes" }, "codesio:with_output_to_codes/3": { "body": ["with_output_to_codes(${1:Goal}, ${2:Codes}, ${3:Tail})$4\n$0" ], "description":" with_output_to_codes(:Goal, -Codes, ?Tail) is det.\n\n Run Goal with as once/1. Output written to =current_output=\n is collected in Codes\\Tail.", "prefix":"with_output_to_codes" }, "codesio:with_output_to_codes/4": { "body": [ "with_output_to_codes(${1:Goal}, ${2:Stream}, ${3:Codes}, ${4:Tail})$5\n$0" ], "description":" with_output_to_codes(:Goal, -Stream, -Codes, ?Tail) is det.\n\n As with_output_to_codes/3, but Stream is unified with the\n temporary stream. This predicate exists for compatibility\n reasons. 
In SWI-Prolog, the temporary stream is also available\n as `current_output`.", "prefix":"with_output_to_codes" }, "codesio:write_term_to_codes/3": { "body": ["write_term_to_codes(${1:Term}, ${2:Codes}, ${3:Options})$4\n$0" ], "description":" write_term_to_codes(+Term, -Codes, +Options) is det.\n\n True when Codes is a string that matches the output of\n write_term/3 using Options.", "prefix":"write_term_to_codes" }, "codesio:write_term_to_codes/4": { "body": [ "write_term_to_codes(${1:Term}, ${2:Codes}, ${3:Tail}, ${4:Options})$5\n$0" ], "description":" write_term_to_codes(+Term, -Codes, ?Tail, +Options) is det.\n\n True when Codes\\Tail is a difference list containing the\n character codes that matches the output of write_term/3 using\n Options.", "prefix":"write_term_to_codes" }, "codesio:write_to_codes/2": { "body": ["write_to_codes(${1:Term}, ${2:Codes})$3\n$0" ], "description":" write_to_codes(+Term, -Codes)\n\n Codes is a list of character codes produced by write/1 on Term.", "prefix":"write_to_codes" }, "codesio:write_to_codes/3": { "body": ["write_to_codes(${1:Term}, ${2:Codes}, ${3:Tail})$4\n$0" ], "description":" write_to_codes(+Term, -Codes, ?Tail)\n\n Codes is a difference-list of character codes produced by write/1 on Term.", "prefix":"write_to_codes" }, "coinduction:coinductive/1": { "body": ["coinductive(${1:Spec})$2\n$0" ], "description":" coinductive(:Spec)\n\n The declaration :- coinductive name/arity, ... defines\n predicates as _coinductive_. The predicate definition is wrapped\n such that goals unify with their ancestors. This directive must\n preceed all clauses of the predicate.", "prefix":"coinductive" }, "collation_key/2": { "body":"collation_key(${1:Atom}, ${2:Key})$3\n$0", "description":"collation_key(+Atom, -Key).\nCreate a Key from Atom for locale-specific comparison. The key is defined such that if the key of atom A precedes the key of atom B in the standard order of terms, A is alphabetically smaller than B using the sort order of the current locale. The predicate collation_key/2 is used by locale_sort/2 from library(sort). Please examine the implementation of locale_sort/2 as an example of using this call. \n\nThe Key is an implementation-defined and generally unreadable string. On systems that do not support locale handling, Key is simply unified with Atom.\n\n", "prefix":"collation_key" }, "compare/3": { "body":"compare(${1:Order}, ${2:Term1}, ${3:Term2})$4\n$0", "description":"[ISO]compare(?Order, @Term1, @Term2).\nDetermine or test the Order between two terms in the standard order of terms. Order is one of <, > or =, with the obvious meaning.", "prefix":"compare" }, "compare_strings/4": { "body":"compare_strings(${1:Table}, ${2:S1}, ${3:S2}, ${4:Result})$5\n$0", "description":"compare_strings(+Table, +S1, +S2, -Result).\nCompare two strings using the named Table. S1 and S2 may be atoms, strings or code-lists. Result is one of the atoms <, = or >.", "prefix":"compare_strings" }, "compile_aux_clauses/1": { "body":"compile_aux_clauses(${1:Clauses})$2\n$0", "description":"compile_aux_clauses(+Clauses).\nCompile clauses on behalf of goal_expansion/2. This predicate compiles the argument clauses into static predicates, associating the predicates with the current file but avoids changing the notion of current predicate and therefore discontiguous warnings. Note that in some cases multiple expansions of similar goals can share the same compiled auxiliary predicate. 
In such cases, the implementation of goal_expansion/2 can use predicate_property/2 using the property defined to test whether the predicate is already defined in the current context.\n\n", "prefix":"compile_aux_clauses" }, "compile_predicates/1": { "body":"compile_predicates(${1:ListOfPredicateIndicators})$2\n$0", "description":"compile_predicates(:ListOfPredicateIndicators).\nCompile a list of specified dynamic predicates (see dynamic/1 and assert/1) into normal static predicates. This call tells the Prolog environment the definition will not change anymore and further calls to assert/1 or retract/1 on the named predicates raise a permission error. This predicate is designed to deal with parts of the program that are generated at runtime but do not change during the remainder of the program execution.73The specification of this predicate is from Richard O'Keefe. The implementation is allowed to optimise the predicate. This is not yet implemented. In multithreaded Prolog, however, static code runs faster as it does not require synchronisation. This is particularly true on SMP hardware.", "prefix":"compile_predicates" }, "compiling/0": { "body":"compiling$1\n$0", "description":"compiling.\nTrue if the system is compiling source files with the -c option or qcompile/1 into an intermediate code file. Can be used to perform conditional code optimisations in term_expansion/2 (see also the -O option) or to omit execution of directives during compilation.", "prefix":"compiling" }, "compound/1": { "body":"compound(${1:Term})$2\n$0", "description":"[ISO]compound(@Term).\nTrue if Term is bound to a compound term. See also functor/3 =../2, compound_name_arity/3 and compound_name_arguments/3.", "prefix":"compound" }, "compound_name_arguments/3": { "body":"compound_name_arguments(${1:Compound}, ${2:Name}, ${3:Arguments})$4\n$0", "description":"compound_name_arguments(?Compound, ?Name, ?Arguments).\nRationalized version of =../2 that can compose and decompose compound terms with zero arguments. See also compound_name_arity/3.", "prefix":"compound_name_arguments" }, "compound_name_arity/3": { "body":"compound_name_arity(${1:Compound}, ${2:Name}, ${3:Arity})$4\n$0", "description":"compound_name_arity(?Compound, ?Name, ?Arity).\nRationalized version of functor/3 that only works for compound terms and can examine and create compound terms with zero arguments (e.g, name(). See also compound_name_arguments/3.", "prefix":"compound_name_arity" }, "consult/1": { "body":"consult(${1:File})$2\n$0", "description":"consult(:File).\nRead File as a Prolog source file. Calls to consult/1 may be abbreviated by just typing a number of filenames in a list. Examples: ?- consult(load). % consult load or load.pl ?- [library(lists)]. % load library lists ?- [user]. % Type program on the terminal The predicate consult/1 is equivalent to load_files(File, []), except for handling the special file user, which reads clauses from the terminal. See also the stream(Input) option of load_files/2. Abbreviation using ?- [file1,file2]. does not work for the empty list ([]). This facility is implemented by defining the list as a predicate. Applications may only rely on using the list abbreviation at the Prolog toplevel and in directives.\n\n", "prefix":"consult" }, "context_module/1": { "body":"context_module(${1:Module})$2\n$0", "description":"context_module(-Module).\nUnify Module with the context module of the current goal. 
context_module/1 itself is, of course, transparent.", "prefix":"context_module" }, "copy_predicate_clauses/2": { "body":"copy_predicate_clauses(${1:From}, ${2:To})$3\n$0", "description":"copy_predicate_clauses(:From, :To).\nCopy all clauses of predicate From to To. The predicate To must be dynamic or undefined. If To is undefined, it is created as a dynamic predicate holding a copy of the clauses of From. If To is a dynamic predicate, the clauses of From are added (as in assertz/1) to the clauses of To. To and From must have the same arity. Acts as if defined by the program below, but at a much better performance by avoiding decompilation and compilation. \n\ncopy_predicate_clauses(From, To) :-\n head(From, MF:FromHead),\n head(To, MT:ToHead),\n FromHead =.. [_|Args],\n ToHead =.. [_|Args],\n forall(clause(MF:FromHead, Body),\n assertz(MT:(ToHead :- Body))).\n\nhead(From, M:Head) :-\n strip_module(From, M, Name/Arity),\n functor(Head, Name, Arity).\n\n ", "prefix":"copy_predicate_clauses" }, "copy_stream_data/2": { "body":"copy_stream_data(${1:StreamIn}, ${2:StreamOut})$3\n$0", "description":"copy_stream_data(+StreamIn, +StreamOut).\nCopy all (remaining) data from StreamIn to StreamOut.", "prefix":"copy_stream_data" }, "copy_stream_data/3": { "body":"copy_stream_data(${1:StreamIn}, ${2:StreamOut}, ${3:Len})$4\n$0", "description":"copy_stream_data(+StreamIn, +StreamOut, +Len).\nCopy Len codes from StreamIn to StreamOut. Note that the copy is done using the semantics of get_code/2 and put_code/2, taking care of possibly recoding that needs to take place between two text files. See section 2.18.1.", "prefix":"copy_stream_data" }, "copy_term/2": { "body":"copy_term(${1:In}, ${2:Out})$3\n$0", "description":"[ISO]copy_term(+In, -Out).\nCreate a version of In with renamed (fresh) variables and unify it to Out. Attributed variables (see section 7.1) have their attributes copied. The implementation of copy_term/2 can deal with infinite trees (cyclic terms). As pure Prolog cannot distinguish a ground term from another ground term with exactly the same structure, ground sub-terms are shared between In and Out. Sharing ground terms does affect setarg/3. SWI-Prolog provides duplicate_term/2 to create a true copy of a term.", "prefix":"copy_term" }, "copy_term/3": { "body":"copy_term(${1:Term}, ${2:Copy}, ${3:Gs})$4\n$0", "description":"copy_term(+Term, -Copy, -Gs).\nCreate a regular term Copy as a copy of Term (without any attributes), and a list Gs of goals that represents the attributes. The goal maplist(call, Gs) recreates the attributes for Copy. The nonterminal attribute_goals/3, as defined in the modules the attributes stem from, is used to convert attributes to lists of goals. This building block is used by the top level to report pending attributes in a portable and understandable fashion. This predicate is the preferred way to reason about and communicate terms with constraints. \n\nThe form copy_term(Term, Term, Gs) can be used to reason about the constraints in which Term is involved.\n\n", "prefix":"copy_term" }, "copy_term_nat/2": { "body":"copy_term_nat(${1:Term}, ${2:Copy})$3\n$0", "description":"copy_term_nat(+Term, -Copy).\nAs copy_term/2. Attributes, however, are not copied but replaced by fresh variables.", "prefix":"copy_term_nat" }, "copysign/2": { "body":"copysign(${1:Expr1}, ${2:Expr2})$3\n$0", "description":"[ISO]copysign(+Expr1, +Expr2).\nEvaluate to X, where the absolute value of X equals the absolute value of Expr1 and the sign of X matches the sign of Expr2. 
This function is based on copysign() from C99, which works on double precision floats and deals with handling the sign of special floating point values such as -0.0. Our implementation follows C99 if both arguments are floats. Otherwise, copysign/2 evaluates to Expr1 if the sign of both expressions matches or -Expr1 if the signs do not match. Here, we use the extended notion of signs for floating point numbers, where the sign of -0.0 and other special floats is negative.", "prefix":"copysign" }, "cos/1": { "body":"cos(${1:Expr})$2\n$0", "description":"[ISO]cos(+Expr).\nResult = cos(Expr). Expr is the angle in radians.", "prefix":"cos" }, "cosh/1": { "body":"cosh(${1:Expr})$2\n$0", "description":"cosh(+Expr).\nResult = cosh(Expr). The hyperbolic cosine of X is defined as (e ** X + e ** -X) / 2.", "prefix":"cosh" }, "cputime/0": { "body":"cputime$1\n$0", "description":"cputime.\nEvaluate to a floating point number expressing the CPU time (in seconds) used by Prolog up till now. See also statistics/2 and time/1.", "prefix":"cputime" }, "cql/cql:attribute_domain/4": { "body":"attribute_domain(${1:Schema}, ${2:TableName}, ${3:ColumnName}, ${4:Domain})$5\n$0", "description":"attribute_domain(+Schema, +TableName, +ColumnName, -Domain).\n", "prefix":"attribute_domain" }, "cql/cql:cql_access_token_to_user_id/2": { "body": ["cql_access_token_to_user_id(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"cql_access_token_to_user_id('Param1','Param2')", "prefix":"cql_access_token_to_user_id" }, "cql/cql:cql_data_type/10": { "body": [ "cql_data_type(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'}, ${8:'Param8'}, ${9:'Param9'}, ${10:'Param10'})$11\n$0" ], "description":"cql_data_type('Param1','Param2','Param3','Param4','Param5','Param6','Param7','Param8','Param9','Param10')", "prefix":"cql_data_type" }, "cql/cql:cql_error/3": { "body": ["cql_error(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"cql_error('Param1','Param2','Param3')", "prefix":"cql_error" }, "cql/cql:cql_event_notification_table/2": { "body":"cql_event_notification_table(${1:Schema}, ${2:TableName})$3\n$0", "description":"[multifile]cql_event_notification_table(+Schema, +TableName).\n", "prefix":"cql_event_notification_table" }, "cql/cql:cql_execute/1": { "body": ["cql_execute(${1:'Param1'})$2\n$0" ], "description":"cql_execute('Param1')", "prefix":"cql_execute" }, "cql/cql:cql_get_module_default_schema/2": { "body":"cql_get_module_default_schema(${1:Module}, ${2:ModuleDefaultSchema})$3\n$0", "description":"cql_get_module_default_schema(+Module, ?ModuleDefaultSchema).\n", "prefix":"cql_get_module_default_schema" }, "cql/cql:cql_goal_expansion/3": { "body":"cql_goal_expansion(${1:Schema}, ${2:Cql}, ${3:GoalExpansion})$4\n$0", "description":"cql_goal_expansion(?Schema, ?Cql, ?GoalExpansion).\nExpand at compile time if the first term is a list of unbound input variables. Expand at runtime if the first term is compile_at_runtime\n\n", "prefix":"cql_goal_expansion" }, "cql/cql:cql_history_attribute/3": { "body":"cql_history_attribute(${1:Schema}, ${2:TableName}, ${3:ColumnName})$4\n$0", "description":"[multifile]cql_history_attribute(+Schema, +TableName, +ColumnName).\n", "prefix":"cql_history_attribute" }, "cql/cql:cql_identity/3": { "body": ["cql_identity(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"cql_identity('Param1','Param2','Param3')", "prefix":"cql_identity" }, "cql/cql:cql_log/4": { "body": [ "cql_log(${1:'Param1'}, ${2:'Param2'}, 
${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"cql_log('Param1','Param2','Param3','Param4')", "prefix":"cql_log" }, "cql/cql:cql_normalize_name/3": { "body":"cql_normalize_name(${1:DBMS}, ${2:Name}, ${3:NormalizedName})$4\n$0", "description":"cql_normalize_name(+DBMS, +Name, -NormalizedName).\nNormalize a name which is potentially longer than the DBMS allows to a unique truncation", "prefix":"cql_normalize_name" }, "cql/cql:cql_odbc_select_statement/4": { "body": [ "cql_odbc_select_statement(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"cql_odbc_select_statement('Param1','Param2','Param3','Param4')", "prefix":"cql_odbc_select_statement" }, "cql/cql:cql_odbc_state_change_statement/7": { "body": [ "cql_odbc_state_change_statement(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'})$8\n$0" ], "description":"cql_odbc_state_change_statement('Param1','Param2','Param3','Param4','Param5','Param6','Param7')", "prefix":"cql_odbc_state_change_statement" }, "cql/cql:cql_portray/2": { "body": ["cql_portray(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"cql_portray('Param1','Param2')", "prefix":"cql_portray" }, "cql/cql:cql_post_state_change_select_sql/4": { "body": [ "cql_post_state_change_select_sql(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"cql_post_state_change_select_sql('Param1','Param2','Param3','Param4')", "prefix":"cql_post_state_change_select_sql" }, "cql/cql:cql_pre_state_change_select_sql/7": { "body": [ "cql_pre_state_change_select_sql(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'})$8\n$0" ], "description":"cql_pre_state_change_select_sql('Param1','Param2','Param3','Param4','Param5','Param6','Param7')", "prefix":"cql_pre_state_change_select_sql" }, "cql/cql:cql_runtime/7": { "body":"cql_runtime(${1:Schema}, ${2:IgnoreIfNullVariables}, ${3:CqlA}, ${4:CqlB}, ${5:VariableMap}, ${6:FileName}, ${7:LineNumber})$8\n$0", "description":"cql_runtime(+Schema, +IgnoreIfNullVariables, +CqlA, +CqlB, +VariableMap, +FileName, +LineNumber).\n", "prefix":"cql_runtime" }, "cql/cql:cql_set_module_default_schema/1": { "body":"cql_set_module_default_schema(${1:Schema})$2\n$0", "description":"cql_set_module_default_schema(+Schema).\nSet the Schema for a module", "prefix":"cql_set_module_default_schema" }, "cql/cql:cql_show/2": { "body":"cql_show(${1:Goal}, ${2:Mode})$3\n$0", "description":"cql_show(:Goal, +Mode).\nCalled when ?/1, ??/1, and ???/1 applied to CQL Goal goal term Mode minimal ; explicit ; full ", "prefix":"cql_show" }, "cql/cql:cql_sql_clause/3": { "body": ["cql_sql_clause(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"cql_sql_clause('Param1','Param2','Param3')", "prefix":"cql_sql_clause" }, "cql/cql:cql_state_change_statistics_sql/8": { "body": [ "cql_state_change_statistics_sql(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'}, ${8:'Param8'})$9\n$0" ], "description":"cql_state_change_statistics_sql('Param1','Param2','Param3','Param4','Param5','Param6','Param7','Param8')", "prefix":"cql_state_change_statistics_sql" }, "cql/cql:cql_statement_location/2": { "body": ["cql_statement_location(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"cql_statement_location('Param1','Param2')", "prefix":"cql_statement_location" }, "cql/cql:cql_temporary_column_name/4": { "body":"cql_temporary_column_name(${1:Schema}, ${2:DataType}, 
${3:ColumnName}, ${4:Type})$5\n$0", "description":"cql_temporary_column_name(?Schema, ?DataType, ?ColumnName, ?Type).\n", "prefix":"cql_temporary_column_name" }, "cql/cql:cql_transaction/3": { "body": [ "cql_transaction(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"cql_transaction('Param1','Param2','Param3')", "prefix":"cql_transaction" }, "cql/cql:cql_update_history_hook/14": { "body": [ "cql_update_history_hook(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'}, ${8:'Param8'}, ${9:'Param9'}, ${10:'Param10'}, ${11:'Param11'}, ${12:'Param12'}, ${13:'Param13'}, ${14:'Param14'})$15\n$0" ], "description":"cql_update_history_hook('Param1','Param2','Param3','Param4','Param5','Param6','Param7','Param8','Param9','Param10','Param11','Param12','Param13','Param14')", "prefix":"cql_update_history_hook" }, "cql/cql:cql_var_check/1": { "body": ["cql_var_check(${1:'Param1'})$2\n$0" ], "description":"cql_var_check('Param1')", "prefix":"cql_var_check" }, "cql/cql:database_attribute/8": { "body":"database_attribute(${1:EntityType}, ${2:Schema}, ${3:EntityName}, ${4:ColumnName}, ${5:DomainOrNativeType}, ${6:AllowsNulls}, ${7:IsIdentity}, ${8:ColumnDefault})$9\n$0", "description":"[nondet,multifile]database_attribute(?EntityType:table/view, ?Schema:atom, ?EntityName:atom, ?ColumnName:atom, ?DomainOrNativeType:atom, ?AllowsNulls:allows_nulls(true/false), ?IsIdentity:is_identity(true/false), ?ColumnDefault).\nCan be autoconfigured.", "prefix":"database_attribute" }, "cql/cql:database_constraint/4": { "body":"database_constraint(${1:Schema}, ${2:EntityName}, ${3:ConstraintName}, ${4:Constraint})$5\n$0", "description":"[nondet,multifile]database_constraint(?Schema:atom, ?EntityName:atom, ?ConstraintName:atom, ?Constraint).\nConstraint is one of: \n\nprimary_key(ColumnNames:list)\nforeign_key(ForeignTableName:atom, ForeignColumnNames:list, ColumnNames:list)\nunique(ColumnNames:list)\ncheck(CheckClause)\n\n In theory this can be autoconfigured too, but I have not written the code for it yet\n\n", "prefix":"database_constraint" }, "cql/cql:database_domain/2": { "body": ["database_domain(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"database_domain('Param1','Param2')", "prefix":"database_domain" }, "cql/cql:database_identity/3": { "body":"database_identity(${1:Schema}, ${2:EntityName}, ${3:ColumnName})$4\n$0", "description":"database_identity(?Schema:atom, ?EntityName:atom, ?ColumnName:atom).\n", "prefix":"database_identity" }, "cql/cql:database_key/5": { "body":"database_key(${1:Schema}, ${2:EntityName}, ${3:ConstraintName}, ${4:KeyColumnNames}, ${5:KeyType})$6\n$0", "description":"database_key(?Schema:atom, ?EntityName:atom, ?ConstraintName:atom, ?KeyColumnNames:list, ?KeyType).\nKeyColumnNames list of atom in database-supplied order KeyType identity ; 'primary key' ; unique ", "prefix":"database_key" }, "cql/cql:dbms/2": { "body":"dbms(${1:Schema}, ${2:DBMSName})$3\n$0", "description":"[multifile]dbms(+Schema, -DBMSName).\nDetermine the DBMS for a given Schema. 
Can be autoconfigured.", "prefix":"dbms" }, "cql/cql:default_schema/1": { "body": ["default_schema(${1:'Param1'})$2\n$0" ], "description":"default_schema('Param1')", "prefix":"default_schema" }, "cql/cql:domain_database_data_type/2": { "body": ["domain_database_data_type(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"domain_database_data_type('Param1','Param2')", "prefix":"domain_database_data_type" }, "cql/cql:in_line_format/4": { "body": [ "in_line_format(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"in_line_format('Param1','Param2','Param3','Param4')", "prefix":"in_line_format" }, "cql/cql:odbc_data_type/4": { "body":"odbc_data_type(${1:Schema}, ${2:TableSpec}, ${3:ColumnName}, ${4:OdbcDataType})$5\n$0", "description":"[multifile]odbc_data_type(+Schema, +TableSpec, +ColumnName, ?OdbcDataType).\nOdbcDataType must be a native SQL datatype, such as varchar(30) or decimal(10, 5) Can be autoconfigured.", "prefix":"odbc_data_type" }, "cql/cql:odbc_execute_with_statistics/4": { "body": [ "odbc_execute_with_statistics(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"odbc_execute_with_statistics('Param1','Param2','Param3','Param4')", "prefix":"odbc_execute_with_statistics" }, "cql/cql:primary_key_column_name/3": { "body":"primary_key_column_name(${1:Schema}, ${2:TableName}, ${3:PrimaryKeyAttributeName})$4\n$0", "description":"[multifile]primary_key_column_name(+Schema, +TableName, -PrimaryKeyAttributeName).\nCan be autoconfigured.", "prefix":"primary_key_column_name" }, "cql/cql:register_database_connection_details/2": { "body":"register_database_connection_details(${1:Schema}, ${2:ConnectionDetails})$3\n$0", "description":"[det]register_database_connection_details(+Schema:atom, +ConnectionDetails).\nThis should be called once to register the database connection details. 
ConnectionDetails driver_string(DriverString) or dsn(Dsn, Username, Password) ", "prefix":"register_database_connection_details" }, "cql/cql:routine_return_type/3": { "body":"routine_return_type(${1:Schema}, ${2:EntityName}, ${3:OdbcType})$4\n$0", "description":"[multifile]routine_return_type(?Schema:atom, ?EntityName:atom, ?OdbcType).\nCan be autoconfigured", "prefix":"routine_return_type" }, "cql/cql:row_count/2": { "body": ["row_count(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"row_count('Param1','Param2')", "prefix":"row_count" }, "cql/cql:sql_gripe/3": { "body": ["sql_gripe(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"sql_gripe('Param1','Param2','Param3')", "prefix":"sql_gripe" }, "cql/cql:sql_gripe_hook/3": { "body":"sql_gripe_hook(${1:Level}, ${2:Format}, ${3:Args})$4\n$0", "description":"[multifile]sql_gripe_hook(+Level, +Format, +Args).\nCalled when something dubious is found by the SQL parser.", "prefix":"sql_gripe_hook" }, "cql/cql:statistic_monitored_attribute/3": { "body":"statistic_monitored_attribute(${1:Schema}, ${2:TableName}, ${3:ColumnName})$4\n$0", "description":"statistic_monitored_attribute(+Schema, +TableName, +ColumnName).\n", "prefix":"statistic_monitored_attribute" }, "cql/cql_database:application_value_to_odbc_value/7": { "body": [ "application_value_to_odbc_value(${1:ApplicationValue}, ${2:OdbcDataType}, ${3:Schema}, ${4:TableName}, ${5:ColumnName}, ${6:Qualifiers}, ${7:OdbcValue})$8\n$0" ], "description":" application_value_to_odbc_value(+ApplicationValue, +OdbcDataType, +Schema, +TableName, +ColumnName, +Qualifiers, -OdbcValue).", "prefix":"application_value_to_odbc_value" }, "cql/cql_database:cql_transaction/3": { "body": [ "cql_transaction(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"cql_transaction('Param1','Param2','Param3')", "prefix":"cql_transaction" }, "cql/cql_database:current_transaction_id/1": { "body": ["current_transaction_id(${1:TransactionId})$2\n$0" ], "description":" current_transaction_id(-TransactionId).", "prefix":"current_transaction_id" }, "cql/cql_database:database_connection_details/2": { "body": ["database_connection_details(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"database_connection_details('Param1','Param2')", "prefix":"database_connection_details" }, "cql/cql_database:database_transaction_query_info/3": { "body": [ "database_transaction_query_info(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"database_transaction_query_info('Param1','Param2','Param3')", "prefix":"database_transaction_query_info" }, "cql/cql_database:get_transaction_context/5": { "body": [ "get_transaction_context(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"get_transaction_context('Param1','Param2','Param3','Param4','Param5')", "prefix":"get_transaction_context" }, "cql/cql_database:odbc_cleanup_and_disconnect/1": { "body": ["odbc_cleanup_and_disconnect(${1:Connection})$2\n$0" ], "description":" odbc_cleanup_and_disconnect(+Connection) is det.\n\n Rollback the current transaction, retract and free prepared statements, then disconnect.\n\n To avoid leaks, all exiting threads with database connections should call this. 
See odbc_connection_call/2 (thread_at_exit/1)\n\n Note that any exception inside odbc_cleanup_and_disconnect/1 will result in it not going on to the next step.\n\n We log exceptions to the event log because exceptions at this level are associated with the server process crashing\n and the SE log is unlikely to capture anything useful.", "prefix":"odbc_cleanup_and_disconnect" }, "cql/cql_database:odbc_connection_call/3": { "body": [ "odbc_connection_call(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"odbc_connection_call('Param1','Param2','Param3')", "prefix":"odbc_connection_call" }, "cql/cql_database:odbc_execute_with_statement_cache/7": { "body": [ "odbc_execute_with_statement_cache(${1:Connection}, ${2:FileName}, ${3:LineNumber}, ${4:Sql}, ${5:OdbcParameters}, ${6:OdbcParameterDataTypes}, ${7:Row})$8\n$0" ], "description":" odbc_execute_with_statement_cache(+Connection, +FileName, +LineNumber, +Sql, +OdbcParameters, +OdbcParameterDataTypes, -Row)", "prefix":"odbc_execute_with_statement_cache" }, "cql/cql_database:odbc_value_to_application_value/5": { "body": [ "odbc_value_to_application_value(${1:Schema}, ${2:TableSpec}, ${3:ColumnName}, ${4:OdbcValue}, ${5:ApplicationValue})$6\n$0" ], "description":" odbc_value_to_application_value(+Schema, +TableSpec, +ColumnName, +OdbcValue, ?ApplicationValue).", "prefix":"odbc_value_to_application_value" }, "cql/cql_database:register_database_connection_details/2": { "body": [ "register_database_connection_details(${1:Schema}, ${2:ConnectionDetails})$3\n$0" ], "description":" register_database_connection_details(+Schema:atom, +ConnectionDetails) is det.\n\n This should be called once to register the database connection details.\n\n @param ConnectionDetails driver_string(DriverString) or dsn(Dsn, Username, Password)", "prefix":"register_database_connection_details" }, "cql/cql_database:resolve_deadlock/1": { "body": ["resolve_deadlock(${1:Goal})$2\n$0" ], "description":" resolve_deadlock(:Goal)\n\n Call Goal as in catch/3. If a deadlock ('40001') error occurs then Goal is *|called again|* immediately if another transaction has completed in the\n time since Goal was called, since that transaction may well have been the reason for the deadlock.\n If no other transaction has completed Goal is *|called again|* after a random delay of 0.0 to 2.0 seconds. The maximum number of retries\n is specified by maximum_deadlock_retries/1. It is important to note that the deadlock mechanism actually *|retries|* Goal, i.e. 
it calls it\n *|again|*.\n\n *|Use this only when you are sure Goal has no non-database side effects (assert/retract, file operations etc)|*\n\n Originally developed for use inside cql_transaction/3, resolve_deadlock/1 can also be used to ensure non-transactional\n operations can resolve deadlocks.", "prefix":"resolve_deadlock" }, "cql/cql_database:save_database_event/6": { "body": [ "save_database_event(${1:AccessToken}, ${2:\n%%}, ${3:\n%%}, ${4:\n%%}, ${5:\n%%}, ${6:\n%%})$7\n$0" ], "description":" save_database_event(+AccessToken,\n +EventType,\n +Schema,\n +TableName,\n +PrimaryKeyColumnName,\n +PrimaryKey).\n\n Need this because its called from the caller's module and we want the fact asserted\n in this module", "prefix":"save_database_event" }, "cql/cql_database:transaction_active/0": { "body": ["transaction_active$1\n$0" ], "description":"transaction_active", "prefix":"transaction_active" }, "cql/cql_database:update_history/14": { "body": [ "update_history(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'}, ${8:'Param8'}, ${9:'Param9'}, ${10:'Param10'}, ${11:'Param11'}, ${12:'Param12'}, ${13:'Param13'}, ${14:'Param14'})$15\n$0" ], "description":"update_history('Param1','Param2','Param3','Param4','Param5','Param6','Param7','Param8','Param9','Param10','Param11','Param12','Param13','Param14')", "prefix":"update_history" }, "cql/cql_hooks:application_value_to_odbc_value_hook/7": { "body": [ "application_value_to_odbc_value_hook(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'})$8\n$0" ], "description":"application_value_to_odbc_value_hook('Param1','Param2','Param3','Param4','Param5','Param6','Param7')", "prefix":"application_value_to_odbc_value_hook" }, "cql/cql_hooks:cql_update_history_hook/16": { "body": [ "cql_update_history_hook(${1:Schema}, ${2:\n%%}, ${3:\n%%}, ${4:\n%%}, ${5:\n%%}, ${6:\n%%}, ${7:\n%%}, ${8:\n%%}, ${9:\n%%}, ${10:\n%%}, ${11:\n%%}, ${12:\n%%}, ${13:\n%%}, ${14:\n%%}, ${15:\n%%}, ${16:\n%%})$17\n$0" ], "description":" cql_update_history_hook(+Schema,\n +TableName,\n +AttributeName,\n +PrimaryKeyAttributeName,\n +PrimaryKeyValue,\n +ApplicationValueBefore,\n +ApplicationValueAfter,\n +AccessToken,\n +UserId,\n +UserIpAddress,\n +TransactionId,\n +TransactionTimestamp,\n +ThreadId,\n +Spid,\n +Connection,\n +Goal).\n\n Use this hook predicate to actually record database attribute value changes.\n\n You are free to let this predicate fail or raise an exception - the\n database layer will ignore both of these eventualities.\n\n @param Schema \n @param TableName (lower case)\n @param AttributeName (lower case)\n @param PrimaryKeyAttributeName (lower case)\n @param PrimaryKeyValue \n @param ApplicationValueBefore \n @param ApplicationValueAfter \n @param AccessToken \n @param UserId \n @param UserIpAddress \n @param TransactionId \n @param TransactionTimestamp \n @param ThreadId \n @param Spid \n @param Connection \n @param Goal The goal passed to pri_db_trans", "prefix":"cql_update_history_hook" }, "cql/cql_hooks:odbc_value_to_application_value_hook/7": { "body": [ "odbc_value_to_application_value_hook(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'}, ${6:'Param6'}, ${7:'Param7'})$8\n$0" ], "description":"odbc_value_to_application_value_hook('Param1','Param2','Param3','Param4','Param5','Param6','Param7')", "prefix":"odbc_value_to_application_value_hook" }, "cql/sql_keywords:reserved_sql_keyword/1": { "body": 
["reserved_sql_keyword(${1:'Param1'})$2\n$0" ], "description":"reserved_sql_keyword('Param1')", "prefix":"reserved_sql_keyword" }, "cql/sql_parser:sql_gripe_level/1": { "body": ["sql_gripe_level(${1:'Param1'})$2\n$0" ], "description":"sql_gripe_level('Param1')", "prefix":"sql_gripe_level" }, "cql/sql_parser:sql_parse/4": { "body": [ "sql_parse(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"sql_parse('Param1','Param2','Param3','Param4')", "prefix":"sql_parse" }, "cql/sql_parser:strip_sql_comments/2": { "body": ["strip_sql_comments(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"strip_sql_comments('Param1','Param2')", "prefix":"strip_sql_comments" }, "cql/sql_tokenizer:sql_tokens/3": { "body": ["sql_tokens(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"sql_tokens('Param1','Param2','Param3')", "prefix":"sql_tokens" }, "cql/sql_write:format_sql_error/3": { "body": [ "format_sql_error(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"format_sql_error('Param1','Param2','Param3')", "prefix":"format_sql_error" }, "cql/sql_write:sql_quote_codes/3": { "body": [ "sql_quote_codes(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"sql_quote_codes('Param1','Param2','Param3')", "prefix":"sql_quote_codes" }, "cql/sql_write:sql_write/3": { "body": ["sql_write(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"sql_write('Param1','Param2','Param3')", "prefix":"sql_write" }, "create_prolog_flag/3": { "body":"create_prolog_flag(${1:Key}, ${2:Value}, ${3:Options})$4\n$0", "description":"[YAP]create_prolog_flag(+Key, +Value, +Options).\nCreate a new Prolog flag. The ISO standard does not foresee creation of new flags, but many libraries introduce new flags. Options is a list of the options below. See also user_flags. access(+Access): Define access rights for the flag. Values are read_write and read_only. The default is read_write.\n\ntype(+Atom): Define a type restriction. Possible values are boolean, atom, integer, float and term. The default is determined from the initial value. Note that term restricts the term to be ground.\n\nkeep(+Boolean): If true, do not modify the flag if it already exists. Otherwise (default), this predicate behaves as set_prolog_flag/2 if the flag already exists.\n\n ", "prefix":"create_prolog_flag" }, "crypt:crypt/2": { "body": ["crypt(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"crypt('Param1','Param2')", "prefix":"crypt" }, "crypto:cert_accept_any/5": { "body":"cert_accept_any(${1:SSL}, ${2:ProblemCertificate}, ${3:AllCertificates}, ${4:FirstCertificate}, ${5:Error})$6\n$0", "description":"[det]cert_accept_any(+SSL, +ProblemCertificate, +AllCertificates, +FirstCertificate, +Error).\nImplementation for the hook `cert_verify_hook(:Hook)` that accepts any certificate. This is intended for http_open/3 if no certificate verification is desired as illustrated below. \n\n http_open('https:/...', In,\n [ cert_verify_hook(cert_accept_any)\n ])\n\n \n\n", "prefix":"cert_accept_any" }, "crypto:crypto_context_hash/2": { "body":"crypto_context_hash(${1:Context}, ${2:Hash})$3\n$0", "description":"crypto_context_hash(+Context, -Hash).\nObtain the hash code of Context. 
Hash is an atom representing the hash code that is associated with the current state of the computation context Context.", "prefix":"crypto_context_hash" }, "crypto:crypto_context_new/2": { "body":"crypto_context_new(${1:Context}, ${2:Options})$3\n$0", "description":"[det]crypto_context_new(-Context, +Options).\nContext is unified with the empty context, taking into account Options. The context can be used in crypto_data_context/3. For Options, see crypto_data_hash/3. Context is an opaque pure Prolog term that is subject to garbage collection. ", "prefix":"crypto_context_new" }, "crypto:crypto_data_context/3": { "body":"crypto_data_context(${1:Data}, ${2:Context0}, ${3:Context})$4\n$0", "description":"[det]crypto_data_context(+Data, +Context0, -Context).\nContext0 is an existing computation context, and Context is the new context after hashing Data in addition to the previously hashed data. Context0 may be produced by a prior invocation of either crypto_context_new/2 or crypto_data_context/3 itself. This predicate allows a hash to be computed in chunks, which may be important while working with Metalink (RFC 5854), BitTorrent or similar technologies, or simply with big files.\n\n", "prefix":"crypto_data_context" }, "crypto:crypto_data_hash/3": { "body":"crypto_data_hash(${1:Data}, ${2:Hash}, ${3:Options})$4\n$0", "description":"[det]crypto_data_hash(+Data, -Hash, +Options).\nHash is the hash of Data. The conversion is controlled by Options: algorithm(+Algorithm): One of md5, sha1, sha224, sha256 (default), sha384, sha512, blake2s256 or blake2b512. The BLAKE digest algorithms require OpenSSL 1.1.0 or greater.\n\nencoding(+Encoding): If Data is a sequence of character codes, this must be translated into a sequence of bytes, because that is what the hashing requires. The default encoding is utf8. The other meaningful value is octet, claiming that Data contains raw bytes.\n\nhmac(+Key): If this option is specified, a hash-based message authentication code (HMAC) is computed, using the specified Key which is either an atom or string. Any of the available digest algorithms can be used with this option. The cryptographic strength of the HMAC depends on that of the chosen algorithm and also on the key. This option requires OpenSSL 1.1.0 or greater.\n\n Data is either an atom, string or code-list Hash is an atom that represents the hash. See also: hex_bytes/2 for conversion between hashes and lists.\n\n ", "prefix":"crypto_data_hash" }, "crypto:crypto_file_hash/3": { "body":"crypto_file_hash(${1:File}, ${2:Hash}, ${3:Options})$4\n$0", "description":"[det]crypto_file_hash(+File, -Hash, +Options).\nTrue if Hash is the hash of the content of File. For Options, see crypto_data_hash/3.", "prefix":"crypto_file_hash" }, "crypto:crypto_open_hash_stream/3": { "body":"crypto_open_hash_stream(${1:OrgStream}, ${2:HashStream}, ${3:Options})$4\n$0", "description":"[det]crypto_open_hash_stream(+OrgStream, -HashStream, +Options).\nOpen a filter stream on OrgStream that maintains a hash. The hash can be retrieved at any time using crypto_stream_hash/2. Available Options in addition to those of crypto_data_hash/3 are: close_parent(+Bool): If true (default), closing the filter stream also closes the original (parent) stream.\n\n ", "prefix":"crypto_open_hash_stream" }, "crypto:crypto_stream_hash/2": { "body":"crypto_stream_hash(${1:HashStream}, ${2:Hash})$3\n$0", "description":"[det]crypto_stream_hash(+HashStream, -Hash).\nUnify Hash with a hash for the bytes sent to or read from HashStream. 
Note that the hash is computed on the stream buffers. If the stream is an output stream, it is first flushed and the Digest represents the hash at the current location. If the stream is an input stream the Digest represents the hash of the processed input including the already buffered data.", "prefix":"crypto_stream_hash" }, "crypto:ecdsa_sign/4": { "body":"ecdsa_sign(${1:Key}, ${2:Data}, ${3:Signature}, ${4:Options})$5\n$0", "description":"ecdsa_sign(+Key, +Data, -Signature, +Options).\nCreate an ECDSA signature for Data with EC private key Key. Among the most common cases is signing a hash that was created with crypto_data_hash/3 or other predicates of this library. For this reason, the default encoding (hex) assumes that Data is an atom, string, character list or code list representing the data in hexadecimal notation. See rsa_sign/4 for an example. Options: \n\nencoding(+Encoding): Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text.\n\n ", "prefix":"ecdsa_sign" }, "crypto:ecdsa_verify/4": { "body":"ecdsa_verify(${1:Key}, ${2:Data}, ${3:Signature}, ${4:Options})$5\n$0", "description":"[semidet]ecdsa_verify(+Key, +Data, +Signature, +Options).\nTrue iff Signature can be verified as the ECDSA signature for Data, using the EC public key Key. Options: \n\nencoding(+Encoding): Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text.\n\n ", "prefix":"ecdsa_verify" }, "crypto:evp_decrypt/6": { "body":"evp_decrypt(${1:CipherText}, ${2:Algorithm}, ${3:Key}, ${4:IV}, ${5:PlainText}, ${6:Options})$7\n$0", "description":"evp_decrypt(+CipherText, +Algorithm, +Key, +IV, -PlainText, +Options).\nDecrypt the given CipherText, using the symmetric algorithm Algorithm, key Key, and iv IV, to give PlainText. CipherText, Key and IV should all be strings, and PlainText is created as a string as well. Algorithm should be an algorithm which your copy of OpenSSL knows about. Examples are: \n\naes-128-cbc\naes-256-cbc\ndes3\n\n If the IV is not needed for your decryption algorithm (such as aes-128-ecb) then any string can be provided as it will be ignored by the underlying implementation \n\nOptions: \n\nencoding(+Encoding): Encoding to use for Data. Default is utf8. Alternatives are utf8 and octet.\n\npadding(+PaddingScheme): Padding scheme to use. Default is block. You can disable padding by supplying none here.\n\n Example of aes-128-cbc encryption: \n\n\n\n?- evp_encrypt(\"this is some input\", 'aes-128-cbc', \"sixteenbyteofkey\",\n \"sixteenbytesofiv\", CipherText, []),\n evp_decrypt(CipherText, 'aes-128-cbc',\n \"sixteenbyteofkey\", \"sixteenbytesofiv\",\n RecoveredText, []).\nCipherText = \nRecoveredText = \"this is some input\".\n\n ", "prefix":"evp_decrypt" }, "crypto:evp_encrypt/6": { "body":"evp_encrypt(${1:PlainText}, ${2:Algorithm}, ${3:Key}, ${4:IV}, ${5:CipherTExt}, ${6:Options})$7\n$0", "description":"evp_encrypt(+PlainText, +Algorithm, +Key, +IV, -CipherTExt, +Options).\nEncrypt the given PlainText, using the symmetric algorithm Algorithm, key Key, and iv IV, to give CipherText. See evp_decrypt/6.", "prefix":"evp_encrypt" }, "crypto:hex_bytes/2": { "body":"hex_bytes(${1:Hex}, ${2:List})$3\n$0", "description":"[det]hex_bytes(?Hex, ?List).\nRelation between a hexadecimal sequence and a list of bytes. Hex is an atom, string, list of characters or list of codes in hexadecimal encoding. This is the format that is used by crypto_data_hash/3 and related predicates to represent hashes. 
Bytes is a list of integers between 0 and 255 that represent the sequence as a list of bytes. At least one of the arguments must be instantiated. When converting List to Hex, an atom is used to represent the sequence of hexadecimal digits. Example: \n\n\n\n?- hex_bytes('501ACE', Bs).\nBs = [80, 26, 206].\n\n \n\n", "prefix":"hex_bytes" }, "crypto:rsa_private_decrypt/4": { "body":"rsa_private_decrypt(${1:PrivateKey}, ${2:CipherText}, ${3:PlainText}, ${4:Options})$5\n$0", "description":"[det]rsa_private_decrypt(+PrivateKey, +CipherText, -PlainText, +Options).\n", "prefix":"rsa_private_decrypt" }, "crypto:rsa_private_encrypt/4": { "body":"rsa_private_encrypt(${1:PrivateKey}, ${2:PlainText}, ${3:CipherText}, ${4:Options})$5\n$0", "description":"[det]rsa_private_encrypt(+PrivateKey, +PlainText, -CipherText, +Options).\n", "prefix":"rsa_private_encrypt" }, "crypto:rsa_public_decrypt/4": { "body":"rsa_public_decrypt(${1:PublicKey}, ${2:CipherText}, ${3:PlainText}, ${4:Options})$5\n$0", "description":"[det]rsa_public_decrypt(+PublicKey, +CipherText, -PlainText, +Options).\n", "prefix":"rsa_public_decrypt" }, "crypto:rsa_public_encrypt/4": { "body":"rsa_public_encrypt(${1:PublicKey}, ${2:PlainText}, ${3:CipherText}, ${4:Options})$5\n$0", "description":"[det]rsa_public_encrypt(+PublicKey, +PlainText, -CipherText, +Options).\nRSA Public key encryption and decryption primitives. A string can be safely communicated by first encrypting it and have the peer decrypt it with the matching key and predicate. The length of the string is limited by the key length. Options: \n\nencoding(+Encoding): Encoding to use for Data. Default is utf8. Alternatives are utf8 and octet.\n\npadding(+PaddingScheme): Padding scheme to use. Default is pkcs1. Alternatives are pkcs1_oaep, sslv23 and none. Note that none should only be used if you implement cryptographically sound padding modes in your application code as encrypting unpadded data with RSA is insecure\n\n Errors: ssl_error(Code, LibName, FuncName, Reason) is raised if there is an error, e.g., if the text is too long for the key.\n\nSee also: load_private_key/3, load_public_key/2 can be use to load keys from a file. The predicate load_certificate/2 can be used to obtain the public key from a certificate.\n\n ", "prefix":"rsa_public_encrypt" }, "crypto:rsa_sign/4": { "body":"rsa_sign(${1:Key}, ${2:Data}, ${3:Signature}, ${4:Options})$5\n$0", "description":"[det]rsa_sign(+Key, +Data, -Signature, +Options).\nCreate an RSA signature for Data with private key Key. Options: type(+Type): SHA algorithm used to compute the digest. Values are sha1 (default), sha224, sha256, sha384 or sha512.\n\nencoding(+Encoding): Encoding to use for Data. Default is hex. 
Alternatives are octet, utf8 and text.\n\n This predicate can be used to compute a sha256WithRSAEncryption signature as follows: \n\n\n\nsha256_with_rsa(PemKeyFile, Password, Data, Signature) :-\n Algorithm = sha256,\n read_key(PemKeyFile, Password, Key),\n crypto_data_hash(Data, Hash, [algorithm(Algorithm),\n encoding(octet)]),\n rsa_sign(Key, Hash, Signature, [type(Algorithm)]).\n\nread_key(File, Password, Key) :-\n setup_call_cleanup(\n open(File, read, In, [type(binary)]),\n load_private_key(In, Password, Key),\n close(In)).\n\n Note that a hash that is computed by crypto_data_hash/3 can be directly used in rsa_sign/4 as well as ecdsa_sign/4.\n\n", "prefix":"rsa_sign" }, "crypto:rsa_verify/4": { "body":"rsa_verify(${1:Key}, ${2:Data}, ${3:Signature}, ${4:Options})$5\n$0", "description":"[semidet]rsa_verify(+Key, +Data, +Signature, +Options).\nVerify an RSA signature for Data with public key Key. Options: \n\ntype(+Type): SHA algorithm used to compute the digest. Values are sha1 (default), sha224, sha256, sha384 or sha512.\n\nencoding(+Encoding): Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text.\n\n ", "prefix":"rsa_verify" }, "crypto_hash:file_sha1/2": { "body": ["file_sha1(${1:File}, ${2:SHA1})$3\n$0" ], "description":" file_sha1(+File, -SHA1:atom) is det.\n\n True when SHA1 is the SHA1 hash for the content of File. Options\n is passed to open/4 and typically used to control whether binary\n or text encoding must be used. The output is compatible to the\n =sha1sum= program found in many systems.", "prefix":"file_sha1" }, "crypto_hash:hash_atom/2": { "body": ["hash_atom(${1:HashCodes}, ${2:HexAtom})$3\n$0" ], "description":" hash_atom(+HashCodes, -HexAtom) is det.\n\n Convert a list of bytes (integers 0..255) into the usual\n hexadecimal notation. E.g.\n\n ==\n ?- sha_hash('SWI-Prolog', Hash, []),\n hash_atom(Hash, Hex).\n Hash = [61, 128, 252, 38, 121, 69, 229, 85, 199|...],\n Hex = '3d80fc267945e555c730403bd0ab0716e2a68c68'.\n ==", "prefix":"hash_atom" }, "crypto_hash:hmac_sha/4": { "body": ["hmac_sha(${1:Key}, ${2:Data}, ${3:Hash}, ${4:Options})$5\n$0" ], "description":" hmac_sha(+Key, +Data, -Hash, +Options) is det\n\n For Options, see sha_hash/3.", "prefix":"hmac_sha" }, "crypto_hash:sha_hash/3": { "body": ["sha_hash(${1:Data}, ${2:Hash}, ${3:Options})$4\n$0" ], "description":" sha_hash(+Data, -Hash, +Options) is det\n\n Hash is the SHA hash of Data, The conversion is controlled\n by Options:\n\n * algorithm(+Algorithm)\n One of =sha1= (default), =sha224=, =sha256=, =sha384= or\n =sha512=\n * encoding(+Encoding)\n If Data is a sequence of character _codes_, this must be\n translated into a sequence of _bytes_, because that is what\n the hashing requires. The default encoding is =utf8=. The\n other meaningful value is =octet=, claiming that Data contains\n raw bytes.\n\n @param Data is either an atom, string or code-list\n @param Hash is a packed string", "prefix":"sha_hash" }, "crypto_hash:sha_hash_ctx/4": { "body": [ "sha_hash_ctx(${1:OldContext}, ${2:Data}, ${3:NewContext}, ${4:Hash})$5\n$0" ], "description":" sha_hash_ctx(+OldContext, +Data, -NewContext, -Hash) is det\n\n Hash is the SHA hash of Data. NewContext is the new SHA\n computation context, while OldContext is the old. 
OldContext\n may be produced by a prior invocation of either sha_new_ctx/3 or\n sha_hash_ctx/4 itself.\n\n This predicate allows a SHA function to be computed in chunks,\n which may be important while working with Metalink (RFC 5854),\n BitTorrent or similar technologies, or simply with big files.", "prefix":"sha_hash_ctx" }, "crypto_hash:sha_new_ctx/2": { "body": ["sha_new_ctx(${1:NewContext}, ${2:Options})$3\n$0" ], "description":" sha_new_ctx(-NewContext, +Options) is det\n\n NewContext is unified with the empty SHA computation context\n (which includes the Options.) It could later be passed to\n sha_hash_ctx/4. For Options, see sha_hash/3.\n\n @param NewContext is an opaque pure Prolog term that is\n subject to garbage collection.", "prefix":"sha_new_ctx" }, "csv:csv/1": { "body":"csv(${1:Rows})$2\n$0", "description":"[det]csv(?Rows)//.\n", "prefix":"csv" }, "csv:csv/2": { "body":"csv(${1:Rows}, ${2:Options})$3\n$0", "description":"[det]csv(?Rows, +Options)//.\nProlog DCG to `read/write' CSV data. Options: separator(+Code): The comma-separator. Must be a character code. Default is (of course) the comma. Character codes can be specified using the 0' notion. E.g., using separator(0';) parses a semicolon separated file.\n\nignore_quotes(+Boolean): If true (default false), threat double quotes as a normal character.\n\nstrip(+Boolean): If true (default false), strip leading and trailing blank space. RFC4180 says that blank space is part of the data.\n\nconvert(+Boolean): If true (default), use name/2 on the field data. This translates the field into a number if possible.\n\nfunctor(+Atom): Functor to use for creating row terms. Default is row.\n\narity(?Arity): Number of fields in each row. This predicate raises a domain_error(row_arity(Expected), Found) if a row is found with different arity.\n\nmatch_arity(+Boolean): If false (default true), do not reject CSV files where lines provide a varying number of fields (columns). This can be a work-around to use some incorrect CSV files.\n\n ", "prefix":"csv" }, "csv:csv_read_file/2": { "body":"csv_read_file(${1:File}, ${2:Rows})$3\n$0", "description":"[det]csv_read_file(+File, -Rows).\n", "prefix":"csv_read_file" }, "csv:csv_read_file/3": { "body":"csv_read_file(${1:File}, ${2:Rows}, ${3:Options})$4\n$0", "description":"[det]csv_read_file(+File, -Rows, +Options).\nRead a CSV file into a list of rows. Each row is a Prolog term with the same arity. Options is handed to csv/4. Remaining options are processed by phrase_from_file/3. The default separator depends on the file name extension and is \\t for .tsv files and , otherwise. Suppose we want to create a predicate table/6 from a CSV file that we know contains 6 fields per record. This can be done using the code below. Without the option arity(6), this would generate a predicate table/N, where N is the number of fields per record in the data. \n\n\n\n?- csv_read_file(File, Rows, [functor(table), arity(6)]),\n maplist(assert, Rows).\n\n ", "prefix":"csv_read_file" }, "csv:csv_read_file_row/3": { "body":"csv_read_file_row(${1:File}, ${2:Row}, ${3:Options})$4\n$0", "description":"[nondet]csv_read_file_row(+File, -Row, +Options).\nTrue when Row is a row in File. First unifies Row with the first row in File. Backtracking yields the second, ... row. This interface is an alternative to csv_read_file/3 that avoids loading all rows in memory. Note that this interface does not guarantee that all rows in File have the same arity. 
In addition to the options of csv_read_file/3, this predicate processes the option: \n\nline(-Line): Line is unified with the 1-based line-number from which Row is read. Note that Line is not the physical line, but rather the logical record number.\n\n To be done: Input is read line by line. If a record separator is embedded in a quoted field, parsing the record fails and another line is added to the input. This does not nicely deal with other reasons why parsing the row may fail.\n\n ", "prefix":"csv_read_file_row" }, "csv:csv_write_file/2": { "body":"csv_write_file(${1:File}, ${2:Data})$3\n$0", "description":"[det]csv_write_file(+File, +Data).\n", "prefix":"csv_write_file" }, "csv:csv_write_file/3": { "body":"csv_write_file(${1:File}, ${2:Data}, ${3:Options})$4\n$0", "description":"[det]csv_write_file(+File, +Data, +Options).\nWrite a list of Prolog terms to a CSV file. Options are given to csv/4. Remaining options are given to open/4. The default separator depends on the file name extension and is \t for .tsv files and , otherwise.", "prefix":"csv_write_file" }, "csv:csv_write_stream/3": { "body":"csv_write_stream(${1:Stream}, ${2:Data}, ${3:Options})$4\n$0", "description":"[det]csv_write_stream(+Stream, +Data, +Options).\nWrite the rows in Data to Stream. This is similar to csv_write_file/3, but can deal with data that is produced incrementally. The example below saves all answers from the predicate data/3 to File. \n\nsave_data(File) :-\n setup_call_cleanup(\n open(File, write, Out),\n forall(data(C1,C2,C3),\n csv_write_stream(Out, [row(C1,C2,C3)], [])),\n close(Out)).\n\n \n\n", "prefix":"csv_write_stream" }, "ctypes:is_alnum/1": { "body": ["is_alnum(${1:'Param1'})$2\n$0" ], "description":"is_alnum('Param1')", "prefix":"is_alnum" }, "ctypes:is_alpha/1": { "body": ["is_alpha(${1:'Param1'})$2\n$0" ], "description":"is_alpha('Param1')", "prefix":"is_alpha" }, "ctypes:is_ascii/1": { "body": ["is_ascii(${1:'Param1'})$2\n$0" ], "description":"is_ascii('Param1')", "prefix":"is_ascii" }, "ctypes:is_cntrl/1": { "body": ["is_cntrl(${1:'Param1'})$2\n$0" ], "description":"is_cntrl('Param1')", "prefix":"is_cntrl" }, "ctypes:is_csym/1": { "body": ["is_csym(${1:'Param1'})$2\n$0" ], "description":"is_csym('Param1')", "prefix":"is_csym" }, "ctypes:is_csymf/1": { "body": ["is_csymf(${1:'Param1'})$2\n$0" ], "description":"is_csymf('Param1')", "prefix":"is_csymf" }, "ctypes:is_digit/1": { "body": ["is_digit(${1:'Param1'})$2\n$0" ], "description":"is_digit('Param1')", "prefix":"is_digit" }, "ctypes:is_digit/3": { "body": ["is_digit(${1:C}, ${2:Base}, ${3:Weight})$4\n$0" ], "description":" is_digit(+C, +Base, -Weight) is det.\n is_digit(-C, +Base, +Weight) is det.\n\n Succeeds if `C' is a digit using `Base' as base and `Weight'\n represents its value. 
Only the base-10 case is handled by code_type.", "prefix":"is_digit" }, "ctypes:is_endfile/1": { "body": ["is_endfile(${1:'Param1'})$2\n$0" ], "description":"is_endfile('Param1')", "prefix":"is_endfile" }, "ctypes:is_endline/1": { "body": ["is_endline(${1:'Param1'})$2\n$0" ], "description":"is_endline('Param1')", "prefix":"is_endline" }, "ctypes:is_graph/1": { "body": ["is_graph(${1:'Param1'})$2\n$0" ], "description":"is_graph('Param1')", "prefix":"is_graph" }, "ctypes:is_lower/1": { "body": ["is_lower(${1:'Param1'})$2\n$0" ], "description":"is_lower('Param1')", "prefix":"is_lower" }, "ctypes:is_newline/1": { "body": ["is_newline(${1:'Param1'})$2\n$0" ], "description":"is_newline('Param1')", "prefix":"is_newline" }, "ctypes:is_newpage/1": { "body": ["is_newpage(${1:'Param1'})$2\n$0" ], "description":"is_newpage('Param1')", "prefix":"is_newpage" }, "ctypes:is_paren/2": { "body": ["is_paren(${1:Open}, ${2:Close})$3\n$0" ], "description":" is_paren(?Open, ?Close) is semidet.\n\n True if Open is the open-parenthesis of Close.", "prefix":"is_paren" }, "ctypes:is_period/1": { "body": ["is_period(${1:'Param1'})$2\n$0" ], "description":"is_period('Param1')", "prefix":"is_period" }, "ctypes:is_print/1": { "body": ["is_print(${1:'Param1'})$2\n$0" ], "description":"is_print('Param1')", "prefix":"is_print" }, "ctypes:is_punct/1": { "body": ["is_punct(${1:'Param1'})$2\n$0" ], "description":"is_punct('Param1')", "prefix":"is_punct" }, "ctypes:is_quote/1": { "body": ["is_quote(${1:'Param1'})$2\n$0" ], "description":"is_quote('Param1')", "prefix":"is_quote" }, "ctypes:is_space/1": { "body": ["is_space(${1:'Param1'})$2\n$0" ], "description":"is_space('Param1')", "prefix":"is_space" }, "ctypes:is_upper/1": { "body": ["is_upper(${1:'Param1'})$2\n$0" ], "description":"is_upper('Param1')", "prefix":"is_upper" }, "ctypes:is_white/1": { "body": ["is_white(${1:'Param1'})$2\n$0" ], "description":"is_white('Param1')", "prefix":"is_white" }, "ctypes:to_lower/2": { "body": ["to_lower(${1:U}, ${2:L})$3\n$0" ], "description":" to_lower(+U, -L) is det.\n\n Downcase a character code. If U is the character code of an\n uppercase character, unify L with the character code of the\n lowercase version. Else unify L with U.", "prefix":"to_lower" }, "ctypes:to_upper/2": { "body": ["to_upper(${1:L}, ${2:U})$3\n$0" ], "description":" to_upper(+L, -U) is det.\n\n Upcase a character code. If L is the character code of a\n lowercase character, unify U with the character code of the\n uppercase version. Else unify U with L.", "prefix":"to_upper" }, "ctypes:upper_lower/2": { "body": ["upper_lower(${1:U}, ${2:L})$3\n$0" ], "description":" upper_lower(?U, ?L) is det.\n\n True when U is the character code of an uppercase character and\n L is the character code of the corresponding lowercase\n character.", "prefix":"upper_lower" }, "current_arithmetic_function/1": { "body":"current_arithmetic_function(${1:Head})$2\n$0", "description":"current_arithmetic_function(?Head).\nTrue when Head is an evaluable function. For example: \n\n?- current_arithmetic_function(sin(_)).\ntrue.\n\n \n\n", "prefix":"current_arithmetic_function" }, "current_atom/1": { "body":"current_atom(${1:Atom})$2\n$0", "description":"current_atom(-Atom).\nSuccessively unifies Atom with all atoms known to the system. 
Note that current_atom/1 always succeeds if Atom is instantiated to an atom.", "prefix":"current_atom" }, "current_blob/2": { "body":"current_blob(${1:Blob}, ${2:Type})$3\n$0", "description":"current_blob(?Blob, ?Type).\nExamine the type or enumerate blobs of the given Type. Typed blobs are supported through the foreign language interface for storing arbitrary BLOBs (Binary Large Object) or handles to external entities. See section 11.4.7 for details.", "prefix":"current_blob" }, "current_char_conversion/2": { "body":"current_char_conversion(${1:CharIn}, ${2:CharOut})$3\n$0", "description":"[ISO]current_char_conversion(?CharIn, ?CharOut).\nQueries the current character conversion table. See char_conversion/2 for details.", "prefix":"current_char_conversion" }, "current_engine/1": { "body":"current_engine(${1:Engine})$2\n$0", "description":"[nondet]current_engine(-Engine).\nTrue when Engine is an existing engine.", "prefix":"current_engine" }, "current_flag/1": { "body":"current_flag(${1:FlagKey})$2\n$0", "description":"current_flag(-FlagKey).\nSuccessively unifies FlagKey with all keys used for flags (see flag/3).", "prefix":"current_flag" }, "current_format_predicate/2": { "body":"current_format_predicate(${1:Code}, ${2:Head})$3\n$0", "description":"current_format_predicate(?Code, ?:Head).\nTrue when ~Code is handled by the user-defined predicate specified by Head.", "prefix":"current_format_predicate" }, "current_functor/2": { "body":"current_functor(${1:Name}, ${2:Arity})$3\n$0", "description":"current_functor(?Name, ?Arity).\nSuccessively unifies Name with the name and Arity with the arity of functors known to the system.", "prefix":"current_functor" }, "current_input/1": { "body":"current_input(${1:Stream})$2\n$0", "description":"[ISO]current_input(-Stream).\nGet the current input stream. Useful for getting access to the status predicates associated with streams.", "prefix":"current_input" }, "current_key/1": { "body":"current_key(${1:Key})$2\n$0", "description":"current_key(-Key).\nSuccessively unifies Key with all keys used for records (see recorda/3, etc.).", "prefix":"current_key" }, "current_locale/1": { "body":"current_locale(${1:Locale})$2\n$0", "description":"current_locale(-Locale).\nTrue when Locale is the locale of the calling thread.", "prefix":"current_locale" }, "current_module/1": { "body":"current_module(${1:Module})$2\n$0", "description":"[nondet]current_module(?Module).\nTrue if Module is a currently defined module. This predicate enumerates all modules, whether loaded from a file or created dynamically. Note that modules cannot be destroyed in the current version of SWI-Prolog.", "prefix":"current_module" }, "current_op/3": { "body":"current_op(${1:Precedence}, ${2:Type}, ${3:Name})$4\n$0", "description":"[ISO]current_op(?Precedence, ?Type, ?:Name).\nTrue if Name is currently defined as an operator of type Type with precedence Precedence. See also op/3.", "prefix":"current_op" }, "current_output/1": { "body":"current_output(${1:Stream})$2\n$0", "description":"[ISO]current_output(-Stream).\nGet the current output stream.", "prefix":"current_output" }, "current_predicate/1": { "body":"current_predicate(${1:PredicateIndicator})$2\n$0", "description":"[ISO]current_predicate(:PredicateIndicator).\nTrue if PredicateIndicator is a currently defined predicate. A predicate is considered defined if it exists in the specified module, is imported into the module or is defined in one of the modules from which the predicate will be imported if it is called (see section 6.9). 
Note that current_predicate/1 does not succeed for predicates that can be autoloaded. See also current_predicate/2 and predicate_property/2. If PredicateIndicator is not fully specified, the predicate only generates values that are defined in or already imported into the target module. Generating all callable predicates therefore requires enumerating modules using current_module/1. Generating predicates callable in a given module requires enumerating the import modules using import_module/2 and the autoloadable predicates using the predicate_property/2 autoload.\n\n", "prefix":"current_predicate" }, "current_predicate/2": { "body":"current_predicate(${1:Name}, ${2:Head})$3\n$0", "description":"current_predicate(?Name, :Head).\nClassical pre-ISO implementation of current_predicate/1, where the predicate is represented by the head term. The advantage is that this can be used for checking the existence of a predicate before calling it without the need for functor/3: \n\ncall_if_exists(G) :-\n current_predicate(_, G),\n call(G).\n\n Because of this intended usage, current_predicate/2 also succeeds if the predicate can be autoloaded. Unfortunately, checking the autoloader makes this predicate relatively slow, in particular because a failed lookup of the autoloader will cause the autoloader to verify that its index is up-to-date.\n\n", "prefix":"current_predicate" }, "current_prolog_flag/2": { "body":"current_prolog_flag(${1:Key}, ${2:Value})$3\n$0", "description":"[ISO]current_prolog_flag(?Key, -Value).\nThe predicate current_prolog_flag/2 defines an interface to installation features: options compiled in, version, home, etc. With both arguments unbound, it will generate all defined Prolog flags. With Key instantiated, it unifies Value with the value of the Prolog flag or fails if the Key is not a Prolog flag. Flags marked rw can be modified by the user using set_prolog_flag/2. Flag values are typed. Flags marked as bool can have the values true or false. The predicate create_prolog_flag/3 may be used to create flags that describe or control behaviour of libraries and applications. The library library(settings) provides an alternative interface for managing notably application parameters. \n\nSome Prolog flags are not defined in all versions, which is normally indicated in the documentation below as ``if present and true''. A boolean Prolog flag is true iff the Prolog flag is present and the Value is the atom true. Tests for such flags should be written as below: \n\n\n\n ( current_prolog_flag(windows, true)\n -> \n ; \n )\n\n Some Prolog flags are scoped to a source file. This implies that if they are set using a directive inside a file, the flag value encountered when loading of the file started is restored when loading of the file is completed. Currently, the following flags are scoped to the source file: generate_debug_info and optimise. \n\nA new thread (see section 9) copies all flags from the thread that created the new thread (its parent).15This is implemented using the copy-on-write tecnhnique. As a consequence, modifying a flag inside a thread does not affect other threads. \n\naccess_level(atom, changeable): This flag defines a normal `user' view (user, default) or a `system' view. In system view all system code is fully accessible as if it was normal user code. In user view, certain operations are not permitted and some details are kept invisible. 
We leave the exact consequences undefined, but, for example, system code can be traced using system access and system predicates can be redefined.\n\naddress_bits(integer): Address size of the hosting machine. Typically 32 or 64. Except for the maximum stack limit, this has few implications to the user. See also the Prolog flag arch.\n\nagc_margin(integer, changeable): If this amount of atoms possible garbage atoms exist perform atom garbage collection at the first opportunity. Initial value is 10,000. May be changed. A value of 0 (zero) disables atom garbage collection. See also PL_register_atom().16Given that SWI-Prolog has no limit on the length of atoms, 10,000 atoms may still occupy a lot of memory. Applications using extremely large atoms may wish to call garbage_collect_atoms/0 explicitly or lower the margin.\n\napple(bool): If present and true, the operating system is MacOSX. Defined if the C compiler used to compile this version of SWI-Prolog defines __APPLE__. Note that the unix is also defined for MacOSX.\n\nallow_dot_in_atom(bool, changeable): If true (default false), dots may be embedded into atoms that are not quoted and start with a letter. The embedded dot must be followed by an identifier continuation character (i.e., letter, digit or underscore). The dot is allowed in identifiers in many languages, which can make this a useful flag for defining DSLs. Note that this conflicts with cascading functional notation. For example, Post.meta.author is read as .(Post, 'meta.author' if this flag is set to true.\n\nallow_variable_name_as_functor(bool, changeable): If true (default is false), Functor(arg) is read as if it were written 'Functor'(arg). Some applications use the Prolog read/1 predicate for reading an application-defined script language. In these cases, it is often difficult to explain to non-Prolog users of the application that constants and functions can only start with a lowercase letter. Variables can be turned into atoms starting with an uppercase atom by calling read_term/2 using the option variable_names and binding the variables to their name. Using this feature, F(x) can be turned into valid syntax for such script languages. Suggested by Robert van Engelen. SWI-Prolog specific.\n\nargv(list, changeable): List is a list of atoms representing the application command line arguments. Application command line arguments are those that have not been processed by Prolog during its initialization. Note that Prolog's argument processing stops at -- or the first non-option argument. See also os_argv.17Prior to version 6.5.2, argv was defined as os_argv is now. The change was made for compatibility reasone and because the current definition is more practical.\n\narch(atom): Identifier for the hardware and operating system SWI-Prolog is running on. Used to select foreign files for the right architecture. See also section 11.2.3 and file_search_path/2.\n\nassociated_file(atom): Set if Prolog was started with a prolog file as argument. Used by e.g., edit/0 to edit the initial file.\n\nautoload(bool, changeable): If true (default) autoloading of library functions is enabled.\n\nback_quotes(codes,chars,string,symbol_char, changeable): Defines the term-representation for back-quoted material. The default is codes. If --traditional is given, the default is symbol_char, which allows using ` in operators composed of symbols.18Older versions had a boolean flag backquoted_strings, which toggled between string and symbol_char. 
See also section 5.2.\n\nbacktrace(bool, changeable): If true (default), print a backtrace on an uncaught exception.\n\nbacktrace_depth(integer, changeable): If backtraces on errors are enabled, this flag defines the maximum number of frames that is printed (default 20).\n\nbacktrace_goal_depth(integer, changeable): The frame of a backtrace is printed after making a shallow copy of the goal. This flag determines the depth to which the goal term is copied. Default is `3'.\n\nbacktrace_show_lines(bool, changeable): If true (default), try to reconstruct the line number at which the exception happened.\n\nbounded(bool): ISO Prolog flag. If true, integer representation is bound by min_integer and max_integer. If false integers can be arbitrarily large and the flags min_integer and max_integer are not present. See section 4.27.2.1.\n\nbreak_level(integer): Current break-level. The initial top level (started with -t) has value 0. See break/0. This flag is absent from threads that are not running a top-level loop.\n\nc_cc(atom, changeable): Name of the C compiler used to compile SWI-Prolog. Normally either gcc or cc. See section 11.5.\n\nc_cflags(atom, changeable): CFLAGS used to compile SWI-Prolog. See section 11.5.\n\nc_ldflags(atom, changeable): LDFLAGS used to link SWI-Prolog. See section 11.5.\n\nc_libs(atom, changeable): Libraries needed to link executables that embed SWI-Prolog. Typically -lswipl if the SWI-Prolog kernel is a shared library (DLL). If the SWI-Prolog kernel is in a static library, this flag also contains the dependencies.\n\nc_libplso(atom, changeable): Libraries needed to link extensions (shared object, DLL) to SWI-Prolog. Typically empty on ELF systems and -lswipl on COFF-based systems. See section 11.5.\n\nchar_conversion(bool, changeable): Determines whether character conversion takes place while reading terms. See also char_conversion/2.\n\ncharacter_escapes(bool, changeable): If true (default), read/1 interprets \\ escape sequences in quoted atoms and strings. May be changed. This flag is local to the module in which it is changed. See section 2.15.1.3.\n\ncolon_sets_calling_context(bool, changeable): Using the construct Module:Goal sets the calling context for executing Goal. This flag is defined by ISO/IEC 13211-2 (Prolog modules standard). See section 6.\n\ncolor_term(bool, changeable): This flag is managed by library library(ansi_term), which is loaded at startup if the two conditions below are both true. Note that this implies that setting this flag to false from the system or personal initialization file (see section 2.2) disables colored output. The predicate message_property/2 can be used to control the actual color scheme depending on the message type passed to print_message/2. stream_property(current_output, tty(true))\n\\+ current_prolog_flag(color_term, false)\n\n\n\ncompile_meta_arguments(atom, changeable): Experimental flag that controls compilation of arguments passed to meta-calls marked `0' or `^' (see meta_predicate/1). Supported values are: false (default): Meta-arguments are passed verbatim. control: Compile meta-arguments that contain control structures ((A,B), (A;B), (A->B;C), etc.). If not compiled at compile time, such arguments are compiled to a temporary clause before execution. Using this option enhances performance of processing complex meta-goals that are known at compile time. true: Also compile references to normal user predicates. 
This harms performance (a little), but enhances the power of the poor man's consistency check used by make/0 and implemented by list_undefined/0. always: Always create an intermediate clause, even for system predicates. This prepares for replacing the normal head of the generated predicate with a special reference (similar to database references as used by, e.g., assert/2) that provides direct access to the executable code, thus avoiding runtime lookup of predicates for meta-calling. \n\ncompiled_at(atom): Describes when the system has been compiled. Only available if the C compiler used to compile SWI-Prolog provides the __DATE__ and __TIME__ macros.\n\nconsole_menu(bool): Set to true in swipl-win.exe to indicate that the console supports menus. See also section 4.35.3.\n\ncpu_count(integer, changeable): Number of physical CPUs or cores in the system. The flag is marked read-write to allow pretending the system has more or fewer processors. See also thread_setconcurrency/2 and the library library(thread). This flag is not available on systems where we do not know how to get the number of CPUs. This flag is not included in a saved state (see qsave_program/1).\n\ndde(bool): Set to true if this instance of Prolog supports DDE as described in section 4.43.\n\ndebug(bool, changeable): Switch debugging mode on/off. If debug mode is activated the system traps encountered spy points (see spy/1) and trace points (see trace/1). In addition, last-call optimisation is disabled and the system is more conservative in destroying choice points to simplify debugging. Disabling these optimisations can cause the system to run out of memory on programs that behave correctly if debug mode is off.\n\ndebug_on_error(bool, changeable): If true, start the tracer after an error is detected. Otherwise just continue execution. The goal that raised the error will normally fail. See also the Prolog flag report_error. Default is true.\n\ndebugger_write_options(term, changeable): This argument is given as option-list to write_term/2 for printing goals by the debugger. Modified by the `w', `p' and `d' commands of the debugger. Default is [quoted(true), portray(true), max_depth(10), attributes(portray)].\n\ndebugger_show_context(bool, changeable): If true, show the context module while printing a stack-frame in the tracer. Normally controlled using the `C' option of the tracer.\n\ndialect(atom): Fixed to swi. The code below is a reliable and portable way to detect SWI-Prolog. \n\nis_dialect(swi) :-\n catch(current_prolog_flag(dialect, swi), _, fail).\n\n \n\ndouble_quotes(codes,chars,atom,string, changeable): This flag determines how double quoted strings are read by Prolog and is ---like character_escapes and back_quotes--- maintained for each module. The default is string, which produces a string as described in section 5.2. If --traditional is given, the default is codes, which produces a list of character codes, integers that represent a Unicode code-point. The value chars produces a list of one-character atoms and the value atom makes double quotes the same as single quotes, creating an atom. See also section 5.\n\neditor(atom, changeable): Determines the editor used by edit/1. See section 4.4.1 for details on selecting the editor used.\n\nemacs_inferior_process(bool): If true, SWI-Prolog is running as an inferior process of (GNU/X-)Emacs. SWI-Prolog assumes this is the case if the environment variable EMACS is t and INFERIOR is yes.\n\nencoding(atom, changeable): Default encoding used for opening files in text mode. 
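For example (hypothetical session; the value depends on the environment): \n\n% illustrative session\n?- current_prolog_flag(encoding, Enc).\nEnc = utf8.\n\n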
The initial value is deduced from the environment. See section 2.18.1 for details.\n\nexecutable(atom): Pathname of the running executable. Used by qsave_program/2 as default emulator.\n\nexit_status(integer): Set by halt/1 to its argument, making the exit status available to hooks registered with at_halt/1.\n\nfile_name_case_handling(atom, changeable): This flag defines how Prolog handles the case of file names. The flag is used for case normalization and to determine whether two names refer to the same file. Note (bug): file name case handling is typically a property of the filesystem, while Prolog only has a global flag to determine its file handling. It has one of the following values: case_sensitive: The filesystem is fully case sensitive. Prolog does not perform any case modification or case insensitive matching. This is the default on Unix systems. case_preserving: The filesystem is case insensitive, but it preserves the case with which the user has created a file. This is the default on Windows systems. case_insensitive: The filesystem doesn't store or match case. In this scenario Prolog maps all file names to lower case. \n\nfile_name_variables(bool, changeable): If true (default false), expand $varname and ~ in arguments of built-in predicates that accept a file name (open/3, exists_file/1, access_file/2, etc.). The predicate expand_file_name/2 can be used to expand environment variables and wildcard patterns. This Prolog flag is intended for backward compatibility with older versions of SWI-Prolog.\n\nfile_search_cache_time(number, changeable): Time in seconds for which search results from absolute_file_name/3 are cached. Within this time limit, the system will first check that the old search result satisfies the conditions. Default is 10 seconds, which typically avoids most repetitive searches for (library) files during compilation. Setting this value to 0 (zero) disables the cache.\n\ngc(bool, changeable): If true (default), the garbage collector is active. If false, neither garbage collection nor stack shifts will take place, not even on explicit request. May be changed.\n\ngenerate_debug_info(bool, changeable): If true (default) generate code that can be debugged using trace/0, spy/1, etc. Can be set to false using the -nodebug option. This flag is scoped within a source file. Many of the libraries have :- set_prolog_flag(generate_debug_info, false) to hide their details from a normal trace.19In the current implementation this only causes a flag to be set on the predicate that causes children to be hidden from the debugger. The name anticipates further changes to the compiler.\n\ngmp_version(integer): If Prolog is linked with GMP, this flag gives the major version of the GMP library used. See also section 11.4.8.\n\ngui(bool): Set to true if XPCE is around and can be used for graphics.\n\nhistory(integer, changeable): If integer > 0, support Unix csh(1)-like history as described in section 2.7. Otherwise, only support reusing commands through the command line editor. The default is to set this Prolog flag to 0 if a command line editor is provided (see Prolog flag readline) and 15 otherwise.\n\nhome(atom): SWI-Prolog's notion of the home directory. 
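For example (hypothetical session; the path depends on the installation): \n\n% illustrative session\n?- current_prolog_flag(home, Home).\nHome = '/usr/lib/swi-prolog'.\n\n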
SWI-Prolog uses its home directory to find its startup file as <home>/boot32.prc (32-bit machines) or <home>/boot64.prc (64-bit machines) and to find its library as <home>/library.\n\nhwnd(integer): In swipl-win.exe, this refers to the MS-Windows window handle of the console window.\n\ninteger_rounding_function(down,toward_zero): ISO Prolog flag describing rounding by // and rem arithmetic functions. Value depends on the C compiler used.\n\niso(bool, changeable): Include some weird ISO compatibility that is incompatible with normal SWI-Prolog behaviour. Currently it has the following effects: The //2 (float division) always returns a float, even if applied to integers that can be divided.\nIn the standard order of terms (see section 4.7.1), all floats are before all integers.\natom_length/2 yields a type error if the first argument is a number.\nclause/[2,3] raises a permission error when accessing static predicates.\nabolish/[1,2] raises a permission error when accessing static predicates.\nSyntax is closer to the ISO standard: Unquoted commas and bars appearing as atoms are not allowed. Instead of f(,,a) now write f(',',a). Unquoted commas can only be used to separate arguments in functional notation and list notation, and as a conjunction operator. Unquoted bars can only appear within lists to separate head and tail, like [Head|Tail], and as infix operator for alternation in grammar rules, like a --> b | c. Within functional notation and list notation terms must have priority below 1000. That means that rules and control constructs appearing as arguments need bracketing. A term like [a :- b, c]. must now be disambiguated to mean [(a :- b), c]. or [(a :- b, c)]. Operators appearing as operands must be bracketed. Instead of X == -, true. write X == (-), true. Currently, this is not entirely enforced. Backslash-escaped newlines are interpreted according to the ISO standard. See section 2.15.1.3.\n\n\n\nlarge_files(bool): If present and true, SWI-Prolog has been compiled with large file support (LFS) and is capable of accessing files larger than 2GB on 32-bit hardware. Large file support is the default on installations built using configure that support it and may be switched off using the configure option --disable-largefile.\n\nlast_call_optimisation(bool, changeable): Determines whether or not last-call optimisation is enabled. Normally the value of this flag is the negation of the debug flag. As programs may run out of stack if last-call optimisation is omitted, it is sometimes necessary to enable it during debugging.\n\nmax_arity(unbounded): ISO Prolog flag describing that there is no maximum arity to compound terms.\n\nmax_integer(integer): Maximum integer value if integers are bounded. See also the flag bounded and section 4.27.2.1.\n\nmax_tagged_integer(integer): Maximum integer value represented as a `tagged' value. Tagged integers require one word storage. Larger integers are represented as `indirect data' and require significantly more space.\n\nmin_integer(integer): Minimum integer value if integers are bounded. See also the flag bounded and section 4.27.2.1.\n\nmin_tagged_integer(integer): Start of the tagged-integer value range.\n\noccurs_check(atom, changeable): This flag controls unification that creates an infinite tree (also called cyclic term) and can have three values. Using false (default), unification succeeds, creating an infinite tree. Using true, unification behaves as unify_with_occurs_check/2, failing silently. 
Using error, an attempt to create a cyclic term results in an occurs_check exception. The latter is intended for debugging unintentional creations of cyclic terms. Note that this flag is a global flag modifying fundamental behaviour of Prolog. Changing the flag from its default may cause libraries to stop functioning properly.\n\nopen_shared_object(bool): If true, open_shared_object/2 and friends are implemented, providing access to shared libraries (.so files) or dynamic link libraries (.DLL files).\n\noptimise(bool, changeable): If true, compile in optimised mode. The initial value is true if Prolog was started with the -O command line option. The optimise flag is scoped to a source file. Currently optimised compilation implies compilation of arithmetic, and deletion of redundant true/0 that may result from expand_goal/2. Later versions might imply various other optimisations such as integrating small predicates into their callers, eliminating constant expressions and other predictable constructs. Source code optimisation is never applied to predicates that are declared dynamic (see dynamic/1).\n\nos_argv(list, changeable): List is a list of atoms representing the command line arguments used to invoke SWI-Prolog. Please note that all arguments are included in the list returned. See argv to get the application options.\n\npid(int): Process identifier of the running Prolog process. Existence of this flag is implementation-defined.\n\npipe(bool, changeable): If true, open(pipe(command), mode, Stream), etc. are supported. Can be changed to disable the use of pipes in applications testing this feature. Not recommended.\n\nprint_write_options(term, changeable): Specifies the options for write_term/2 used by print/1 and print/2.\n\nprompt_alternatives_on(atom, changeable): Determines prompting for alternatives in the Prolog top level. Default is determinism, which implies the system prompts for alternatives if the goal succeeded while leaving choice points. Many classical Prolog systems behave as groundness: they prompt for alternatives if and only if the query contains variables.\n\nprotect_static_code(bool, changeable): If true (default false), clause/2 does not operate on static code, providing some basic protection from hackers that wish to list the static code of your Prolog program. Once the flag is true, it cannot be changed back to false. Protection is default in ISO mode (see Prolog flag iso). Note that many parts of the development environment require clause/2 to work on static code, and enabling this flag should thus only be used for production code.\n\nqcompile(atom, changeable): This option provides the default for the qcompile(+Atom) option of load_files/2.\n\nreadline(atom, changeable): Specifies which form of command line editing is provided. Possible values are below. The flag may be set from the user's init file (see section 2.3) to one of false, readline or editline. This causes the toplevel not to load a command line editor (false) or load the specified one. If loading fails the flag is set to false. falseNo command line editing is available.readlineThe library library(readline) is loaded, providing line editing based on the GNU readline library.editlineThe library library(editline) is loaded, providing line editing based on the BSD libedit. This is the default if library(editline) is available and can be loaded.swipl_winSWI-Prolog uses its own console (swipl-win.exe on Windows, the Qt based swipl-win on MacOS) which provides line editing. 
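One can check which line editor is active (hypothetical session; the value depends on the build): \n\n% illustrative session\n?- current_prolog_flag(readline, Editor).\nEditor = editline.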
\n\nresource_database(atom): Set to the absolute filename of the attached state. Typically this is the file boot32.prc, the file specified with -x or the running executable. See also resource/3.\n\nreport_error(bool, changeable): If true, print error messages; otherwise suppress them. May be changed. See also the debug_on_error Prolog flag. Default is true, except for the runtime version.\n\nruntime(bool): If present and true, SWI-Prolog is compiled with -DO_RUNTIME, disabling various useful development features (currently the tracer and profiler).\n\nsandboxed_load(bool, changeable): If true (default false), load_files/2 calls hooks to allow library(sandbox) to verify the safety of directives.\n\nsaved_program(bool): If present and true, Prolog has been started from a state saved with qsave_program/[1,2].\n\nshared_object_extension(atom): Extension used by the operating system for shared objects. .so for most Unix systems and .dll for Windows. Used for locating files using the file_type executable. See also absolute_file_name/3.\n\nshared_object_search_path(atom): Name of the environment variable used by the system to search for shared objects.\n\nsignals(bool): Determine whether Prolog is handling signals (software interrupts). This flag is false if the hosting OS does not support signal handling or the command line option -nosignals is active. See section 11.4.21.1 for details.\n\nstream_type_check(atom, changeable): Defines whether and how strictly the system validates that byte I/O should not be applied to text streams and text I/O should not be applied to binary streams. Values are false (no checking), true (full checking) and loose. Using checking mode loose (default), the system accepts byte I/O from text stream that use ISO Latin-1 encoding and accepts writing text to binary streams.\n\nsystem_thread_id(int): Available in multithreaded version (see section 9) where the operating system provides system-wide integer thread identifiers. The integer is the thread identifier used by the operating system for the calling thread. See also thread_self/1.\n\ntable_space(integer, changeable): Space reserved for storing answer tables for tabled predicates (see table/1).bugCurrently only counts the space occupied by the nodes in the answer tries. When exceeded a resource_error(table_space) exception is raised.\n\ntimezone(integer): Offset in seconds west of GMT of the current time zone. Set at initialization time from the timezone variable associated with the POSIX tzset() function. See also format_time/3.\n\ntoplevel_mode(atom, changeable): If backtracking (default), the toplevel backtracks after completing a query. If recursive, the toplevel is implemented as a recursive loop. This implies that global variables set using b_setval/2 are maintained between queries. In recursive mode, answers to toplevel variables (see section 2.8) are kept in backtrackable global variables and thus not copied. In backtracking mode answers to toplevel variables are kept in the recorded database (see section 4.14.2). The recursive mode has been added for interactive usage of CHR (see section 8),20Suggested by Falco Nogatz which maintains the global constraint store in backtrackable global variables.\n\ntoplevel_print_anon(bool, changeable): If true, top-level variables starting with an underscore (_) are printed normally. If false they are hidden. 
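For example (hypothetical session): \n\n% illustrative session\n?- set_prolog_flag(toplevel_print_anon, false).\ntrue.\n\n?- _Tmp = 42.\ntrue.\n\n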
This may be used to hide bindings in complex queries from the top level.\n\ntoplevel_print_factorized(bool, changeable): If true (default false) show the internal sharing of subterms in the answer substitution. The example below reveals internal sharing of leaf nodes in red-black trees as implemented by the library(rbtrees) predicate rb_new1: \n\n?- set_prolog_flag(toplevel_print_factorized, true).\n?- rb_new(X).\nX = t(_S1, _S1), % where\n _S1 = black('', _G387, _G388, '').\n\n If this flag is false, the % where notation is still used to indicate cycles as illustrated below. This example also shows that the implementation reveals the internal cycle length, and not the minimal cycle length. Cycles of different length are indistinguishable in Prolog (as illustrated by S == R). \n\n?- S = s(S), R = s(s(R)), S == R.\nS = s(S),\nR = s(s(R)).\n\n \n\nanswer_write_options(term, changeable): This argument is given as option-list to write_term/2 for printing results of queries. Default is [quoted(true), portray(true), max_depth(10), attributes(portray)].\n\ntoplevel_prompt(atom, changeable): Define the prompt that is used by the interactive top level. The following ~ (tilde) sequences are replaced: ~mType in module if not user (see module/1) ~lBreak level if not 0 (see break/0) ~dDebugging state if not normal execution (see debug/0, trace/0) ~!History event if history is enabled (see flag history) \n\ntoplevel_var_size(int, changeable): Maximum size counted in literals of a term returned as a binding for a variable in a top-level query that is saved for re-use using the $ variable reference. See section 2.8.\n\ntrace_gc(bool, changeable): If true (default false), garbage collections and stack-shifts will be reported on the terminal. May be changed. Values are reported in bytes as G+T, where G is the global stack value and T the trail stack value. `Gained' describes the number of bytes reclaimed. `used' the number of bytes on the stack after GC and `free' the number of bytes allocated, but not in use. Below is an example output. \n\n% GC: gained 236,416+163,424 in 0.00 sec;\n used 13,448+5,808; free 72,568+47,440\n\n \n\ntraditional(bool): Available in SWI-Prolog version 7. If true, `traditional' mode has been selected using --traditional. Notice that some SWI7 features, like the functional notation on dicts, do not work in this mode. See also section 5.\n\ntty_control(bool, changeable): Determines whether the terminal is switched to raw mode for get_single_char/1, which also reads the user actions for the trace. May be set. If this flag is false at startup, command line editing is disabled. See also the +/-tty command line option.\n\nunix(bool): If present and true, the operating system is some version of Unix. Defined if the C compiler used to compile this version of SWI-Prolog either defines __unix__ or unix. On other systems this flag is not available. See also apple and windows.\n\nunknown(fail,warning,error, changeable): Determines the behaviour if an undefined procedure is encountered. If fail, the predicate fails silently. If warn, a warning is printed, and execution continues as if the predicate was not defined, and if error (default), an existence_error exception is raised. This flag is local to each module and inherited from the module's import-module. Using default setup, this implies that normal modules inherit the flag from user, which in turn inherit the value error from system. 
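For example, a specific module could opt out of existence errors (a minimal sketch; the module name is hypothetical and this is not generally advisable): \n\n:- module(scratch, []).   % hypothetical module\n:- set_prolog_flag(unknown, fail).\n\n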
The user may change the flag for module user to change the default for all application modules or for a specific module. It is strongly advised to keep the error default and use dynamic/1 and/or multifile/1 to specify possible non-existence of a predicate.\n\nunload_foreign_libraries(bool, changeable): If true (default false), unload all loaded foreign libraries. Default is false because modern OSes reclaim the resources anyway and unloading the foreign code may cause registered hooks to point to no longer existing data or code.\n\nuser_flags(Atom, changeable): Define the behaviour of set_prolog_flag/2 if the flag is not known. Values are silent, warning and error. The first two create the flag on-the-fly, where warning prints a message. The value error is consistent with ISO: it raises an existence error and does not create the flag. See also create_prolog_flag/3. The default is silent, but future versions may change that. Developers are encouraged to use another value and ensure proper use of create_prolog_flag/3 to create flags for their library.\n\nvar_prefix(bool, changeable): If true (default false), variables must start with an underscore (_). May be changed. This flag is local to the module in which it is changed. See section 2.15.1.7.\n\nverbose(atom, changeable): This flag is used by print_message/2. If its value is silent, messages of type informational and banner are suppressed. The -q switches the value from the initial normal to silent.\n\nverbose_autoload(bool, changeable): If true the normal consult message will be printed if a library is autoloaded. By default this message is suppressed. Intended to be used for debugging purposes.\n\nverbose_load(atom, changeable): Determines messages printed for loading (compiling) Prolog files. Current values are full (print a message at the start and end of each file loaded), normal (print a message at the end of each file loaded), brief (print a message at end of loading the toplevel file), and silent (no messages are printed, default). The value of this flag is normally controlled by the option silent(Bool) provided by load_files/2.\n\nverbose_file_search(bool, changeable): If true (default false), print messages indicating the progress of absolute_file_name/[2,3] in locating files. Intended for debugging complicated file-search paths. See also file_search_path/2.\n\nversion(integer): The version identifier is an integer with value: 10000 × Major + 100 × Minor + Patch\n\nversion_data(swi(Major, Minor, Patch, Extra)): Part of the dialect compatibility layer; see also the Prolog flag dialect and section C. Extra provides platform-specific version information as a list. Extra is used for tagged versions such as ``7.4.0-rc1'', in which case Extra contains a term tag(rc1).\n\nversion_git(atom): Available if created from a git repository. See git-describe for details.\n\nwarn_override_implicit_import(bool, changeable): If true (default), a warning is printed if an implicitly imported predicate is clobbered by a local definition. See use_module/1 for details.\n\nwin_file_access_check(atom, changeable): Controls the behaviour or access_file/2 under Windows. There is no reliable way to check access to files and directories on Windows. This flag allows for switching between three alternative approximations. accessUse Windows _waccess() function. This ignores ACLs (Access Control List) and thus may indicate that access is allowed while it is not.filesecurityUse the Windows GetFileSecurity() function. 
This does not work on all file systems, but is probably the best choice on file systems that do support it, notably local NTFS volumes.opencloseTry to open the file and close it. This works reliable for files, but not for directories. Currently directories are checked using _waccess(). This is the default. \n\nwindows(bool): If present and true, the operating system is an implementation of Microsoft Windows. This flag is only available on MS-Windows based versions. See also unix.\n\nwrite_attributes(atom, changeable): Defines how write/1 and friends write attributed variables. The option values are described with the attributes option of write_term/3. Default is ignore.\n\nwrite_help_with_overstrike(bool): Internal flag used by help/1 when writing to a terminal. If present and true it prints bold and underlined text using overstrike.\n\nxpce(bool): Available and set to true if the XPCE graphics system is loaded.\n\nxpce_version(atom): Available and set to the version of the loaded XPCE system.\n\nxref(bool, changeable): If true, source code is being read for analysis purposes such as cross-referencing. Otherwise (default) it is being read to be compiled. This flag is used at several places by term_expansion/2 and goal_expansion/2 hooks, notably if these hooks use side effects. See also the libraries library(prolog_source) and library(prolog_xref).\n\n ", "prefix":"current_prolog_flag" }, "current_signal/3": { "body":"current_signal(${1:Name}, ${2:Id}, ${3:Handler})$4\n$0", "description":"current_signal(?Name, ?Id, ?Handler).\nEnumerate the currently defined signal handling. Name is the signal name, Id is the numerical identifier and Handler is the currently defined handler (see on_signal/3).", "prefix":"current_signal" }, "current_stream/3": { "body":"current_stream(${1:Object}, ${2:Mode}, ${3:Stream})$4\n$0", "description":"current_stream(?Object, ?Mode, ?Stream).\nThe predicate current_stream/3 is used to access the status of a stream as well as to generate all open streams. Object is the name of the file opened if the stream refers to an open file, an integer file descriptor if the stream encapsulates an operating system stream, or the atom [] if the stream refers to some other object. Mode is one of read or write.", "prefix":"current_stream" }, "current_trie/1": { "body":"current_trie(${1:Trie})$2\n$0", "description":"[nondet]current_trie(-Trie).\nTrue if Trie is a currently existing trie. As this enumerates and then filters all known atoms this predicate is slow and should only be used for debugging purposes.", "prefix":"current_trie" }, "cyclic_term/1": { "body":"cyclic_term(${1:Term})$2\n$0", "description":"cyclic_term(@Term).\nTrue if Term contains cycles, i.e. is an infinite term. See also acyclic_term/1 and section 2.16.53The predicates cyclic_term/1 and acyclic_term/1 are compatible with SICStus Prolog. Some Prolog systems supporting cyclic terms use is_cyclic/1 .", "prefix":"cyclic_term" }, "date:date_time_value/3": { "body": ["date_time_value(${1:Field}, ${2:Struct}, ${3:Value})$4\n$0" ], "description":" date_time_value(?Field:atom, +Struct:datime, -Value) is nondet.\n\n Extract values from a date-time structure. 
Provided fields are\n\n | year | integer | |\n | month | 1..12 | |\n | day | 1..31 | |\n | hour | 0..23 | |\n | minute | 0..59 | |\n | second | 0.0..60.0 | |\n | utc_offset | integer | Offset to UTC in seconds (positive is west) |\n | daylight_saving | bool | Name of timezone; fails if unknown |\n | date | date(Y,M,D) | |\n | time | time(H,M,S) | |", "prefix":"date_time_value" }, "date:day_of_the_week/2": { "body": ["day_of_the_week(${1:Date}, ${2:DayOfTheWeek})$3\n$0" ], "description":" day_of_the_week(+Date, -DayOfTheWeek) is det.\n\n Computes the day of the week for a given date. Days of the week\n are numbered from one to seven: monday = 1, tuesday = 2, ...,\n sunday = 7.\n\n @param Date is a term of the form date(+Year, +Month, +Day)", "prefix":"day_of_the_week" }, "date:day_of_the_year/2": { "body": ["day_of_the_year(${1:Date}, ${2:DayOfTheYear})$3\n$0" ], "description":" day_of_the_year(+Date, -DayOfTheYear) is det.\n\n Computes the day of the year for a given date. Days of the year\n are numbered from 1 to 365 (366 for a leap year).\n\n @param Date is a term of the form date(+Year, +Month, +Day)", "prefix":"day_of_the_year" }, "date:parse_time/2": { "body": ["parse_time(${1:Text}, ${2:Stamp})$3\n$0" ], "description":" parse_time(+Text, -Stamp) is semidet.\n parse_time(+Text, ?Format, -Stamp) is semidet.\n\n Stamp is a timestamp created from parsing Text using the\n representation Format. Currently supported formats are:\n\n * rfc_1123\n Used for the HTTP protocol to represent time-stamps\n * iso_8601\n Commonly used in XML documents.", "prefix":"parse_time" }, "date:parse_time/3": { "body": ["parse_time(${1:Text}, ${2:Format}, ${3:Stamp})$4\n$0" ], "description":" parse_time(+Text, -Stamp) is semidet.\n parse_time(+Text, ?Format, -Stamp) is semidet.\n\n Stamp is a timestamp created from parsing Text using the\n representation Format. Currently supported formats are:\n\n * rfc_1123\n Used for the HTTP protocol to represent time-stamps\n * iso_8601\n Commonly used in XML documents.", "prefix":"parse_time" }, "date_time_stamp/2": { "body":"date_time_stamp(${1:DateTime}, ${2:TimeStamp})$3\n$0", "description":"date_time_stamp(+DateTime, -TimeStamp).\nCompute the timestamp from a date/9 term. Values for month, day, hour, minute or second need not be normalized. This flexibility allows for easy computation of the time at any given number of these units from a given timestamp. Normalization can be achieved following this call with stamp_date_time/3. This example computes the date 200 days after 2006-7-14: \n\n?- date_time_stamp(date(2006,7,214,0,0,0,0,-,-), Stamp),\n stamp_date_time(Stamp, D, 0),\n date_time_value(date, D, Date).\nDate = date(2007, 1, 30)\n\n When computing a time stamp from a local time specification, the UTC offset (arg 7), TZ (arg 8) and DST (arg 9) argument may be left unbound and are unified with the proper information. The example below, executed in Amsterdam, illustrates this behaviour. On the 25th of March at 01:00, DST does not apply. At 02.00, the clock is advanced by one hour and thus both 02:00 and 03:00 represent the same time stamp. 
\n\n\n\n1 ?- date_time_stamp(date(2012,3,25,1,0,0,UTCOff,TZ,DST),\n Stamp).\nUTCOff = -3600,\nTZ = 'CET',\nDST = false,\nStamp = 1332633600.0.\n\n2 ?- date_time_stamp(date(2012,3,25,2,0,0,UTCOff,TZ,DST),\n Stamp).\nUTCOff = -7200,\nTZ = 'CEST',\nDST = true,\nStamp = 1332637200.0.\n\n3 ?- date_time_stamp(date(2012,3,25,3,0,0,UTCOff,TZ,DST),\n Stamp).\nUTCOff = -7200,\nTZ = 'CEST',\nDST = true,\nStamp = 1332637200.0.\n\n Note that DST and offset calculation are based on the POSIX function mktime(). If mktime() returns an error, a representation_error dst is generated.\n\n", "prefix":"date_time_stamp" }, "date_time_value/3": { "body":"date_time_value(${1:Key}, ${2:DateTime}, ${3:Value})$4\n$0", "description":"date_time_value(?Key, +DateTime, ?Value).\nExtract values from a date/9 term. Provided keys are: \n\nkeyvalue year Calendar year as an integer month Calendar month as an integer 1..12 day Calendar day as an integer 1..31 hour Clock hour as an integer 0..23 minute Clock minute as an integer 0..59 second Clock second as a float 0.0..60.0 utc_offset Offset to UTC in seconds (positive is west) time_zone Name of timezone; fails if unknown daylight_saving Bool daylight_savingtrue) if dst is in effect date Term date(Y,M,D) time Term time(H,M,S) ", "prefix":"date_time_value" }, "day_of_the_week/2": { "body":"day_of_the_week(${1:Date}, ${2:DayOfTheWeek})$3\n$0", "description":"day_of_the_week(+Date,-DayOfTheWeek).\nComputes the day of the week for a given date. Date = date(Year,Month,Day). Days of the week are numbered from one to seven: Monday = 1, Tuesday = 2, ... , Sunday = 7.", "prefix":"day_of_the_week" }, "dcg/dcg_basics:alpha_to_lower/3": { "body": ["alpha_to_lower(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"alpha_to_lower('Param1','Param2','Param3')", "prefix":"alpha_to_lower" }, "dcg/dcg_basics:atom/3": { "body": ["atom(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"atom('Param1','Param2','Param3')", "prefix":"atom" }, "dcg/dcg_basics:blank/2": { "body": ["blank(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"blank('Param1','Param2')", "prefix":"blank" }, "dcg/dcg_basics:blanks/2": { "body": ["blanks(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"blanks('Param1','Param2')", "prefix":"blanks" }, "dcg/dcg_basics:blanks_to_nl/2": { "body": ["blanks_to_nl(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"blanks_to_nl('Param1','Param2')", "prefix":"blanks_to_nl" }, "dcg/dcg_basics:digit/3": { "body": ["digit(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"digit('Param1','Param2','Param3')", "prefix":"digit" }, "dcg/dcg_basics:digits/3": { "body": ["digits(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"digits('Param1','Param2','Param3')", "prefix":"digits" }, "dcg/dcg_basics:eos/2": { "body": ["eos(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"eos('Param1','Param2')", "prefix":"eos" }, "dcg/dcg_basics:float/3": { "body": ["float(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"float('Param1','Param2','Param3')", "prefix":"float" }, "dcg/dcg_basics:integer/3": { "body": ["integer(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"integer('Param1','Param2','Param3')", "prefix":"integer" }, "dcg/dcg_basics:nonblank/3": { "body": ["nonblank(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nonblank('Param1','Param2','Param3')", "prefix":"nonblank" }, "dcg/dcg_basics:nonblanks/3": { "body": ["nonblanks(${1:'Param1'}, 
${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nonblanks('Param1','Param2','Param3')", "prefix":"nonblanks" }, "dcg/dcg_basics:number/3": { "body": ["number(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"number('Param1','Param2','Param3')", "prefix":"number" }, "dcg/dcg_basics:prolog_var_name/3": { "body": [ "prolog_var_name(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"prolog_var_name('Param1','Param2','Param3')", "prefix":"prolog_var_name" }, "dcg/dcg_basics:remainder/3": { "body": ["remainder(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"remainder('Param1','Param2','Param3')", "prefix":"remainder" }, "dcg/dcg_basics:string/3": { "body": ["string(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"string('Param1','Param2','Param3')", "prefix":"string" }, "dcg/dcg_basics:string_without/4": { "body": [ "string_without(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"string_without('Param1','Param2','Param3','Param4')", "prefix":"string_without" }, "dcg/dcg_basics:white/2": { "body": ["white(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"white('Param1','Param2')", "prefix":"white" }, "dcg/dcg_basics:whites/2": { "body": ["whites(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"whites('Param1','Param2')", "prefix":"whites" }, "dcg/dcg_basics:xdigit/3": { "body": ["xdigit(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"xdigit('Param1','Param2','Param3')", "prefix":"xdigit" }, "dcg/dcg_basics:xdigits/3": { "body": ["xdigits(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"xdigits('Param1','Param2','Param3')", "prefix":"xdigits" }, "dcg/dcg_basics:xinteger/3": { "body": ["xinteger(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"xinteger('Param1','Param2','Param3')", "prefix":"xinteger" }, "dcg_translate_rule/2": { "body":"dcg_translate_rule(${1:In}, ${2:Out})$3\n$0", "description":"dcg_translate_rule(+In, -Out).\nThis predicate performs the translation of a term Head-->Body into a normal Prolog clause. Normally this functionality should be accessed using expand_term/2.", "prefix":"dcg_translate_rule" }, "dcg_translate_rule/4": { "body":"dcg_translate_rule(${1:In}, ${2:LayoutIn}, ${3:Out}, ${4:LayoutOut})$5\n$0", "description":"dcg_translate_rule(+In, ?LayoutIn, -Out, -LayoutOut).\nThese versions are called before their 2-argument counterparts. The input layout term is either a variable (if no layout information is available) or a term carrying detailed layout information as returned by the subterm_positions of read_term/2.", "prefix":"dcg_translate_rule" }, "dde_current_connection/2": { "body":"dde_current_connection(${1:Service}, ${2:Topic})$3\n$0", "description":"dde_current_connection(-Service, -Topic).\nFind currently open conversations.", "prefix":"dde_current_connection" }, "dde_current_service/2": { "body":"dde_current_service(${1:Service}, ${2:Topic})$3\n$0", "description":"dde_current_service(-Service, -Topic).\nFind currently registered services and the topics served on them.", "prefix":"dde_current_service" }, "dde_execute/2": { "body":"dde_execute(${1:Handle}, ${2:Command})$3\n$0", "description":"dde_execute(+Handle, +Command).\nRequest the DDE server to execute the given command string. 
Succeeds if the command could be executed and fails with an error message otherwise.", "prefix":"dde_execute" }, "dde_poke/3": { "body":"dde_poke(${1:Handle}, ${2:Item}, ${3:Command})$4\n$0", "description":"dde_poke(+Handle, +Item, +Command).\nIssue a POKE command to the server on the specified Item. command is passed as data of type CF_TEXT.", "prefix":"dde_poke" }, "dde_register_service/2": { "body":"dde_register_service(${1:Template}, ${2:Goal})$3\n$0", "description":"dde_register_service(+Template, +Goal).\nRegister a server to handle DDE request or DDE execute requests from other applications. To register a service for a DDE request, Template is of the form: +Service(+Topic, +Item, +Value) Service is the name of the DDE service provided (like progman in the client example above). Topic is either an atom, indicating Goal only handles requests on this topic, or a variable that also appears in Goal. Item and Value are variables that also appear in Goal. Item represents the request data as a Prolog atom.135Up to version 3.4.5 this was a list of character codes. As recent versions have atom garbage collection there is no need for this anymore. The example below registers the Prolog current_prolog_flag/2 predicate to be accessible from other applications. The request may be given from the same Prolog as well as from another application. \n\n\n\n?- dde_register_service(prolog(current_prolog_flag, F, V),\n current_prolog_flag(F, V)).\n\n?- open_dde_conversation(prolog, current_prolog_flag, Handle),\n dde_request(Handle, home, Home),\n close_dde_conversation(Handle).\n\nHome = '/usr/local/lib/pl-2.0.6/'\n\n Handling DDE execute requests is very similar. In this case the template is of the form:\n\n +Service(+Topic, +Item) Passing a Value argument is not needed as execute requests either succeed or fail. If Goal fails, a `not processed' is passed back to the caller of the DDE request.\n\n", "prefix":"dde_register_service" }, "dde_request/3": { "body":"dde_request(${1:Handle}, ${2:Item}, ${3:Value})$4\n$0", "description":"dde_request(+Handle, +Item, -Value).\nRequest a value from the server. Item is an atom that identifies the requested data, and Value will be a string (CF_TEXT data in DDE parlance) representing that data, if the request is successful.", "prefix":"dde_request" }, "dde_unregister_service/1": { "body":"dde_unregister_service(${1:Service})$2\n$0", "description":"dde_unregister_service(+Service).\nStop responding to Service. If Prolog is halted, it will automatically call this on all open services.", "prefix":"dde_unregister_service" }, "debug/0": { "body":"debug$1\n$0", "description":"debug.\nStart debugger. In debug mode, Prolog stops at spy and trace points, disables last-call optimisation and aggressive destruction of choice points to make debugging information accessible. Implemented by the Prolog flag debug. Note that the min_free parameter of all stacks is enlarged to 8 K cells if debugging is switched off in order to avoid excessive GC. GC complicates tracing because it renames the _G variables and replaces unreachable variables with the atom . Calling nodebug/0 does not reset the initial free-margin because several parts of the top level and debugger disable debugging of system code regions. See also set_prolog_stack/2.\n\n", "prefix":"debug" }, "debug:assertion/1": { "body":"assertion(${1:Goal})$2\n$0", "description":"[det]assertion(:Goal).\nActs similar to C assert() macro. It has no effect if Goal succeeds. 
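A typical use guards a precondition (a minimal sketch; the predicate name is hypothetical): \n\n% hypothetical helper using assertion/1\nincrement(X0, X) :-\n    assertion(integer(X0)),\n    X is X0 + 1.\n\n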
If Goal fails or throws an exception, the following steps are taken: \n\ncall prolog:assertion_failed/2. If prolog:assertion_failed/2 fails, then: If this is an interactive toplevel thread, print a message, the stack-trace, and finally trap the debugger.Otherwise, throw error(assertion_error(Reason, G),_) where Reason is one of fail or the exception raised.\n\n", "prefix":"assertion" }, "debug:debug/1": { "body":"debug(${1:Topic})$2\n$0", "description":"[det]debug(+Topic).\n", "prefix":"debug" }, "debug:debug/3": { "body":"debug(${1:Topic}, ${2:Format}, ${3:Args})$4\n$0", "description":"[det]debug(+Topic, +Format, :Args).\nFormat a message if debug topic is enabled. Similar to format/3 to user_error, but only prints if Topic is activated through debug/1. Args is a meta-argument to deal with goal for the @-command. Output is first handed to the hook prolog:debug_print_hook/3. If this fails, Format+Args is translated to text using the message-translation (see print_message/2) for the term debug(Format, Args) and then printed to every matching destination (controlled by debug/1) using print_message_lines/3. The message is preceded by '% ' and terminated with a newline. \n\nSee also: format/3.\n\n ", "prefix":"debug" }, "debug:debug_message_context/1": { "body":"debug_message_context(${1:What})$2\n$0", "description":"[det]debug_message_context(+What).\nSpecify additional context for debug messages. What is one of +Context or -Context, and Context is one of thread, time or time(Format), where Format is a format specification for format_time/3 (default is %T.%3f). Initially, debug/3 shows only thread information.", "prefix":"debug_message_context" }, "debug:debugging/1": { "body":"debugging(${1:Topic})$2\n$0", "description":"[nondet]debugging(-Topic).\n", "prefix":"debugging" }, "debug:debugging/2": { "body":"debugging(${1:Topic}, ${2:Bool})$3\n$0", "description":"[nondet]debugging(?Topic, ?Bool).\nExamine debug topics. The form debugging(+Topic) may be used to perform more complex debugging tasks. A typical usage skeleton is: \n\n ( debugging(mytopic)\n -> \n ; true\n ),\n ...\n\n The other two calls are intended to examine existing and enabled debugging tokens and are typically not used in user programs.\n\n", "prefix":"debugging" }, "debug:nodebug/1": { "body":"nodebug(${1:Topic})$2\n$0", "description":"[det]nodebug(+Topic).\nAdd/remove a topic from being printed. nodebug(_) removes all topics. Gives a warning if the topic is not defined unless it is used from a directive. The latter allows placing debug topics at the start of a (load-)file without warnings. For debug/1, Topic can be a term Topic > Out, where Out is either a stream or stream-alias or a filename (atom). This redirects debug information on this topic to the given output.\n\n", "prefix":"nodebug" }, "debugging/0": { "body":"debugging$1\n$0", "description":"debugging.\nPrint debug status and spy points on current output stream. See also the Prolog flag debug.", "prefix":"debugging" }, "default_module/2": { "body":"default_module(${1:Module}, ${2:Default})$3\n$0", "description":"[multi]default_module(+Module, -Default).\nTrue if predicates and operators in Default are visible in Module. Modules are returned in the same search order used for predicates and operators. 
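In a default setup one may see (hypothetical session): \n\n% illustrative session\n?- default_module(user, Default).\nDefault = user ;\nDefault = system.\n\n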
That is, Default is first unified with Module, followed by the depth-first transitive closure of import_module/2.", "prefix":"default_module" }, "del_attr/2": { "body":"del_attr(${1:Var}, ${2:Module})$3\n$0", "description":"del_attr(+Var, +Module).\nDelete the named attribute. If Var loses its last attribute it is transformed back into a traditional Prolog variable. If Module is not an atom, a type error is raised. In all other cases this predicate succeeds regardless of whether or not the named attribute is present.", "prefix":"del_attr" }, "del_attrs/1": { "body":"del_attrs(${1:Var})$2\n$0", "description":"del_attrs(+Var).\nIf Var is an attributed variable, delete all its attributes. In all other cases, this predicate succeeds without side-effects.", "prefix":"del_attrs" }, "del_dict/4": { "body":"del_dict(${1:Key}, ${2:DictIn}, ${3:Value}, ${4:DictOut})$5\n$0", "description":"del_dict(+Key, +DictIn, ?Value, -DictOut).\nTrue when Key-Value is in DictIn and DictOut contains all associations of DictIn except for Key.", "prefix":"del_dict" }, "delete_directory/1": { "body":"delete_directory(${1:Directory})$2\n$0", "description":"delete_directory(+Directory).\nDelete directory (folder) from the filesystem. Raises an exception on failure. Please note that in general it will not be possible to delete a non-empty directory.", "prefix":"delete_directory" }, "delete_file/1": { "body":"delete_file(${1:File})$2\n$0", "description":"delete_file(+File).\nRemove File from the file system.", "prefix":"delete_file" }, "delete_import_module/2": { "body":"delete_import_module(${1:Module}, ${2:Import})$3\n$0", "description":"delete_import_module(+Module, +Import).\nDelete Import from the list of import modules for Module. Fails silently if Import is not in the list.", "prefix":"delete_import_module" }, "deterministic/1": { "body":"deterministic(${1:Boolean})$2\n$0", "description":"deterministic(-Boolean).\nUnifies its argument with true if no choice point exists that is more recent than the entry of the clause in which it appears. There are few realistic situations for using this predicate. It is used by the prolog/0 top level to check whether Prolog should prompt the user for alternatives. 
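A sketch that reports the determinism of an arbitrary goal (the helper det_call/2 is hypothetical; results may vary with clause indexing): \n\n% hypothetical helper\ndet_call(Goal, Det) :-\n    call(Goal),\n    deterministic(Det).\n\n?- det_call(member(X, [a,b]), Det).\nX = a,\nDet = false ;\nX = b,\nDet = true.\n\n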
Similar results can be achieved in a more portable fashion using call_cleanup/2.", "prefix":"deterministic" }, "dia_main:dialog/0": {"body": ["dialog$1\n$0" ], "description":"dialog", "prefix":"dialog"}, "dialect/bim:atomconcat/3": { "body": ["atomconcat(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"atomconcat('Param1','Param2','Param3')", "prefix":"atomconcat" }, "dialect/bim:bim_erase/1": { "body": ["bim_erase(${1:'Param1'})$2\n$0" ], "description":"bim_erase('Param1')", "prefix":"bim_erase" }, "dialect/bim:bim_erase/2": { "body": ["bim_erase(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"bim_erase('Param1','Param2')", "prefix":"bim_erase" }, "dialect/bim:bim_random/1": { "body": ["bim_random(${1:'Param1'})$2\n$0" ], "description":"bim_random('Param1')", "prefix":"bim_random" }, "dialect/bim:bim_recorded/3": { "body": ["bim_recorded(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"bim_recorded('Param1','Param2','Param3')", "prefix":"bim_recorded" }, "dialect/bim:bindVariables/1": { "body": ["bindVariables(${1:'Param1'})$2\n$0" ], "description":"bindVariables('Param1')", "prefix":"bindVariables" }, "dialect/bim:cputime/1": { "body": ["cputime(${1:'Param1'})$2\n$0" ], "description":"cputime('Param1')", "prefix":"cputime" }, "dialect/bim:erase_all/1": { "body": ["erase_all(${1:'Param1'})$2\n$0" ], "description":"erase_all('Param1')", "prefix":"erase_all" }, "dialect/bim:index/2": { "body": ["index(${1:PI}, ${2:Indices})$3\n$0" ], "description":"\tindex(+PI, +Indices) is det.\n\n\tIndex in the given arguments. SWI-Prolog performs JIT indexing.", "prefix":"index" }, "dialect/bim:inttoatom/2": { "body": ["inttoatom(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"inttoatom('Param1','Param2')", "prefix":"inttoatom" }, "dialect/bim:please/2": { "body": ["please(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"please('Param1','Param2')", "prefix":"please" }, "dialect/bim:predicate_type/2": { "body": ["predicate_type(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"predicate_type('Param1','Param2')", "prefix":"predicate_type" }, "dialect/bim:printf/2": { "body": ["printf(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"printf('Param1','Param2')", "prefix":"printf" }, "dialect/bim:record/3": { "body": ["record(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"record('Param1','Param2','Param3')", "prefix":"record" }, "dialect/bim:rerecord/2": { "body": ["rerecord(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"rerecord('Param1','Param2')", "prefix":"rerecord" }, "dialect/bim:setdebug/0": { "body": ["setdebug$1\n$0" ], "description":"setdebug", "prefix":"setdebug" }, "dialect/bim:update/1": { "body": ["update(${1:'Param1'})$2\n$0" ], "description":"update('Param1')", "prefix":"update" }, "dialect/bim:vread/2": { "body": ["vread(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"vread('Param1','Param2')", "prefix":"vread" }, "dialect/bim:writeClause/2": { "body": ["writeClause(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"writeClause('Param1','Param2')", "prefix":"writeClause" }, "dialect/commons:feature/1": { "body": ["feature(${1:Feature})$2\n$0" ], "description":"\tfeature(+Feature) is semidet.\n\n\tProvide the condition for :- if(feature(...)).", "prefix":"feature" }, "dialect/eclipse/test_util_iso:test/1": { "body": ["test(${1:TestFile})$2\n$0" ], "description":"\ttest(+TestFile) is det.\n\n\tRuns all the test patterns in TestFile.", "prefix":"test" }, 
"dialect/eclipse/test_util_iso:test/2": { "body": ["test(${1:TestFile}, ${2:ResultFile})$3\n$0" ], "description":"\ttest(+TestFile, +ResultFile) is det.\n\n\tRuns all the test patterns in TestFile, and logs results in\n\tResultFile.", "prefix":"test" }, "dialect/hprolog/format:format/2": { "body": ["format(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"format('Param1','Param2')", "prefix":"format" }, "dialect/hprolog/format:format/3": { "body": ["format(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"format('Param1','Param2','Param3')", "prefix":"format" }, "dialect/hprolog:bounded_sublist/3": { "body": ["bounded_sublist(${1:Sub}, ${2:List}, ${3:Bound})$4\n$0" ], "description":"\tbounded_sublist(?Sub, +List, +Bound:integer)\n\n\tAs sublist/2, but Sub has at most Bound elements. E.g. the call\n\tbelow generates all 21 sublists of length =< 2 from the second\n\targument.\n\n\t==\n\t?- bounded_sublist(List, [a,b,c,d], 2).\n\tX = [] ;\n\tX = [a] ;\n\tX = [a, b] ;\n\tX = [a] ;\n\t...\n\t==", "prefix":"bounded_sublist" }, "dialect/hprolog:chr_delete/3": { "body": ["chr_delete(${1:List}, ${2:Element}, ${3:Rest})$4\n$0" ], "description":"\tchr_delete(+List, +Element, -Rest) is det.\n\n\tRest is a copy of List without elements matching Element using\n\t==.", "prefix":"chr_delete" }, "dialect/hprolog:drop/3": { "body": ["drop(${1:N}, ${2:List}, ${3:ListMinFirstN})$4\n$0" ], "description":"\tdrop(+N, +List, -ListMinFirstN) is semidet.\n\n\tDrop the first N elements from List and unify the remainder with\n\tLastElements.", "prefix":"drop" }, "dialect/hprolog:ds_to_list/2": { "body": ["ds_to_list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"ds_to_list('Param1','Param2')", "prefix":"ds_to_list" }, "dialect/hprolog:empty_ds/1": { "body": ["empty_ds(${1:'Param1'})$2\n$0" ], "description":"empty_ds('Param1')", "prefix":"empty_ds" }, "dialect/hprolog:get_ds/3": { "body": ["get_ds(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"get_ds('Param1','Param2','Param3')", "prefix":"get_ds" }, "dialect/hprolog:get_store/2": { "body": ["get_store(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"get_store('Param1','Param2')", "prefix":"get_store" }, "dialect/hprolog:init_store/2": { "body": ["init_store(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"init_store('Param1','Param2')", "prefix":"init_store" }, "dialect/hprolog:intersect_eq/3": { "body": ["intersect_eq(${1:List1}, ${2:List2}, ${3:Intersection})$4\n$0" ], "description":"\tintersect_eq(+List1, +List2, -Intersection)\n\n\tDetermine the intersection of two lists without unifying values.", "prefix":"intersect_eq" }, "dialect/hprolog:list_difference_eq/3": { "body": ["list_difference_eq(${1:List}, ${2:Subtract}, ${3:Rest})$4\n$0" ], "description":"\tlist_difference_eq(+List, -Subtract, -Rest)\n\n\tDelete all elements of Subtract from List and unify the result\n\twith Rest. 
Element comparision is done using ==/2.", "prefix":"list_difference_eq" }, "dialect/hprolog:make_get_store_goal/3": { "body": [ "make_get_store_goal(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"make_get_store_goal('Param1','Param2','Param3')", "prefix":"make_get_store_goal" }, "dialect/hprolog:make_get_store_goal_no_error/3": { "body": [ "make_get_store_goal_no_error(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"make_get_store_goal_no_error('Param1','Param2','Param3')", "prefix":"make_get_store_goal_no_error" }, "dialect/hprolog:make_init_store_goal/3": { "body": [ "make_init_store_goal(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"make_init_store_goal('Param1','Param2','Param3')", "prefix":"make_init_store_goal" }, "dialect/hprolog:make_update_store_goal/3": { "body": [ "make_update_store_goal(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"make_update_store_goal('Param1','Param2','Param3')", "prefix":"make_update_store_goal" }, "dialect/hprolog:max_go_list/2": { "body": ["max_go_list(${1:List}, ${2:Max})$3\n$0" ], "description":"\tmax_go_list(+List, -Max)\n\n\tReturn the maximum of List in the standard order of terms.", "prefix":"max_go_list" }, "dialect/hprolog:memberchk_eq/2": { "body": ["memberchk_eq(${1:Val}, ${2:List})$3\n$0" ], "description":"\tmemberchk_eq(+Val, +List)\n\n\tDeterministic check of membership using == rather than\n\tunification.", "prefix":"memberchk_eq" }, "dialect/hprolog:or_list/2": { "body": ["or_list(${1:ListOfInts}, ${2:BitwiseOr})$3\n$0" ], "description":"\tor_list(+ListOfInts, -BitwiseOr)\n\n\tDo a bitwise disjuction over all integer members of ListOfInts.", "prefix":"or_list" }, "dialect/hprolog:put_ds/4": { "body": [ "put_ds(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"put_ds('Param1','Param2','Param3','Param4')", "prefix":"put_ds" }, "dialect/hprolog:split_at/4": { "body": ["split_at(${1:N}, ${2:List}, ${3:FirstN}, ${4:Rest})$5\n$0" ], "description":"\tsplit_at(+N, +List, +FirstN, -Rest) is semidet.\n\n\tCombines take/3 and drop/3.", "prefix":"split_at" }, "dialect/hprolog:sublist/2": { "body": ["sublist(${1:Sub}, ${2:List})$3\n$0" ], "description":"\tsublist(?Sub, +List) is nondet.\n\n\tTrue if all elements of Sub appear in List in the same order.", "prefix":"sublist" }, "dialect/hprolog:substitute_eq/4": { "body": [ "substitute_eq(${1:OldVal}, ${2:OldList}, ${3:NewVal}, ${4:NewList})$5\n$0" ], "description":"\tsubstitute_eq(+OldVal, +OldList, +NewVal, -NewList)\n\n\tSubstitute OldVal by NewVal in OldList and unify the result\n\twith NewList.", "prefix":"substitute_eq" }, "dialect/hprolog:take/3": { "body": ["take(${1:N}, ${2:List}, ${3:FirstElements})$4\n$0" ], "description":"\ttake(+N, +List, -FirstElements)\n\n\tTake the first N elements from List and unify this with\n\tFirstElements. The definition is based on the GNU-Prolog lists\n\tlibrary. 
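\tFor example (a hypothetical query):\n\n\t==\n\t% illustrative\n\t?- take(2, [a,b,c,d], First).\n\tFirst = [a, b].\n\t==\n\n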
Implementation by Jan Wielemaker.", "prefix":"take" }, "dialect/hprolog:time/3": { "body": ["time(${1:Goal}, ${2:CPU}, ${3:Wall})$4\n$0" ], "description":"\ttime(:Goal, -CPU, -Wall)\n\n\thProlog compatible predicate to for statistical purposes", "prefix":"time" }, "dialect/hprolog:update_store/2": { "body": ["update_store(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"update_store('Param1','Param2')", "prefix":"update_store" }, "dialect/ifprolog:asserta_with_names/2": { "body": ["asserta_with_names(${1:Clause}, ${2:VarNames})$3\n$0" ], "description":"\tasserta_with_names(@Clause, +VarNames) is det.\n\tassertz_with_names(@Clause, +VarNames) is det.\n\tclause_with_names(?Head, ?Body, -VarNames) is det.\n\tretract_with_names(?Clause, -VarNames) is det.\n\n\tPredicates that manage the database while keeping track of\n\tvariable names.", "prefix":"asserta_with_names" }, "dialect/ifprolog:assertz_with_names/2": { "body": ["assertz_with_names(${1:Clause}, ${2:VarNames})$3\n$0" ], "description":"\tasserta_with_names(@Clause, +VarNames) is det.\n\tassertz_with_names(@Clause, +VarNames) is det.\n\tclause_with_names(?Head, ?Body, -VarNames) is det.\n\tretract_with_names(?Clause, -VarNames) is det.\n\n\tPredicates that manage the database while keeping track of\n\tvariable names.", "prefix":"assertz_with_names" }, "dialect/ifprolog:assign_alias/2": { "body": ["assign_alias(${1:Alias}, ${2:Stream})$3\n$0" ], "description":"\tassign_alias(+Alias, @Stream) is det.\n", "prefix":"assign_alias" }, "dialect/ifprolog:atom_part/4": { "body": ["atom_part(${1:Atom}, ${2:Pos}, ${3:Len}, ${4:Sub})$5\n$0" ], "description":"\tatom_part(+Atom, +Pos, +Len, -Sub) is det.\n\n\tTrue when Sub is part of the atom [Pos,Pos+Len). Unifies Sub\n\twith '' if Pos or Len is out of range!?", "prefix":"atom_part" }, "dialect/ifprolog:atom_prefix/3": { "body": ["atom_prefix(${1:Atom}, ${2:Len}, ${3:Sub})$4\n$0" ], "description":"\tatom_prefix(+Atom, +Len, -Sub) is det.\n\n\tUnifies Sub with the atom formed by the first Len characters in\n\tatom.\n\n\t - If Len < 1, Sub is unified with the null atom ''.\n\t - If Len > length of Atom, Sub is unified with Atom.", "prefix":"atom_prefix" }, "dialect/ifprolog:atom_split/3": { "body": ["atom_split(${1:Atom}, ${2:Delimiter}, ${3:Subatoms})$4\n$0" ], "description":"\tatom_split( +Atom, +Delimiter, ?Subatoms )\n\n\tSplit Atom over Delimiter and unify the parts with Subatoms.", "prefix":"atom_split" }, "dialect/ifprolog:atom_suffix/3": { "body": ["atom_suffix(${1:Atom}, ${2:Len}, ${3:Sub})$4\n$0" ], "description":"\tatom_suffix(+Atom, +Len, -Sub) is det.\n\n\tUnifies Sub with the atom formed by the last Len characters in\n\tatom.\n\n\t - If Len < 1, Sub is unified with the null atom ''.\n\t - If Len > length of Atom, Sub is unified with Atom.", "prefix":"atom_suffix" }, "dialect/ifprolog:block/3": { "body": ["block(${1:Goal}, ${2:Tag}, ${3:Recovery})$4\n$0" ], "description":"\tblock(:Goal, +Tag, :Recovery).\n\texit_block(+Tag).\n\tcut_block(+Tag) is semidet.\n\n\tThe control construct block/3 runs Goal in a block labelled Tag.\n\tIf Goal calls exit_block/1 using a matching Tag, the execution\n\tof Goal is abandoned using exception handling and execution\n\tcontinues by running Recovery. Goal can call cut_block/1. If\n\tthere is a block with matching Tag, all choice points created\n\tsince the block was started are destroyed.\n\n\t@bug\tThe block control structure is implemented on top of\n\t\tcatch/3 and throw/1. 
If catch/3 is used inside Goal,\n\t\tthe user must ensure that either (1) the protected\n\t\tgoal does not call exit_block/1 or cut_block/1 or (2)\n\t\tthe _Catcher_ of the catch/3 call does *not* unify with\n\t\ta term block(_,_).", "prefix":"block" }, "dialect/ifprolog:calling_context/1": { "body": ["calling_context(${1:Context})$2\n$0" ], "description":"\tcalling_context(-Context)\n\n\tMapped to context_module/1.", "prefix":"calling_context" }, "dialect/ifprolog:clause_with_names/3": { "body": ["clause_with_names(${1:Head}, ${2:Body}, ${3:VarNames})$4\n$0" ], "description":"\tasserta_with_names(@Clause, +VarNames) is det.\n\tassertz_with_names(@Clause, +VarNames) is det.\n\tclause_with_names(?Head, ?Body, -VarNames) is det.\n\tretract_with_names(?Clause, -VarNames) is det.\n\n\tPredicates that manage the database while keeping track of\n\tvariable names.", "prefix":"clause_with_names" }, "dialect/ifprolog:context/2": { "body": ["context(${1:Goal}, ${2:Mapping})$3\n$0" ], "description":"\tcontext(:Goal, +Mapping)\n\n\tIF/Prolog context/2 construct. This is the true predicate. This\n\tis normally mapped by goal-expansion.\n\n\t@bug\tDoes not deal with IF/Prolog signal mapping", "prefix":"context" }, "dialect/ifprolog:current_default_module/1": { "body": ["current_default_module(${1:Module})$2\n$0" ], "description":"\tcurrent_default_module(-Module) is det.\n\n\tName of the toplevel typein module.", "prefix":"current_default_module" }, "dialect/ifprolog:current_error/1": { "body": ["current_error(${1:Stream})$2\n$0" ], "description":"\tcurrent_error(-Stream)\n\n\tDoesn't exist in SWI-Prolog, but =user_error= is always an alias\n\tto the current error stream.", "prefix":"current_error" }, "dialect/ifprolog:current_global/1": { "body": ["current_global(${1:Name})$2\n$0" ], "description":"\tcurrent_global(+Name) is semidet.\n\tget_global(+Name, ?Value) is det.\n\tset_global(+Name, ?Value) is det.\n\tunset_global(+Name) is det.\n\n\tIF/Prolog global variables, mapped to SWI-Prolog's nb_*\n\tpredicates.", "prefix":"current_global" }, "dialect/ifprolog:current_signal/2": { "body": ["current_signal(${1:Signal}, ${2:Mode})$3\n$0" ], "description":"\tcurrent_signal(?Signal, ?Mode) is nondet.\n\n\tTrue when Mode is the current mode for handling Signal. Modes\n\tare =on=, =off=, =default=, =ignore=. Signals are =abort=,\n\t=alarm=, =interrupt=, =pipe=, =quit=, =termination=, =user_1=\n\tand =user_2=.\n\n\t@tbd\tImplement", "prefix":"current_signal" }, "dialect/ifprolog:current_visible/2": { "body": ["current_visible(${1:Module}, ${2:PredicateIndicator})$3\n$0" ], "description":"\tcurrent_visible(@Module, @PredicateIndicator).\n\n\tFIXME check with documentation", "prefix":"current_visible" }, "dialect/ifprolog:cut_block/1": { "body": ["cut_block(${1:Tag})$2\n$0" ], "description":"\tblock(:Goal, +Tag, :Recovery).\n\texit_block(+Tag).\n\tcut_block(+Tag) is semidet.\n\n\tThe control construct block/3 runs Goal in a block labelled Tag.\n\tIf Goal calls exit_block/1 using a matching Tag, the execution\n\tof Goal is abandoned using exception handling and execution\n\tcontinues by running Recovery. Goal can call cut_block/1. If\n\tthere is a block with matching Tag, all choice points created\n\tsince the block was started are destroyed.\n\n\t@bug\tThe block control structure is implemented on top of\n\t\tcatch/3 and throw/1. 
If catch/3 is used inside Goal,\n\t\tthe user must ensure that either (1) the protected\n\t\tgoal does not call exit_block/1 or cut_block/1 or (2)\n\t\tthe _Catcher_ of the catch/3 call does *not* unify with\n\t\ta term block(_,_).", "prefix":"cut_block" }, "dialect/ifprolog:debug_config/3": { "body": ["debug_config(${1:Key}, ${2:Current}, ${3:Value})$4\n$0" ], "description":"\tdebug_config(+Key, -Current, +Value)\n\n\tIgnored. Prints a message.", "prefix":"debug_config" }, "dialect/ifprolog:debug_mode/3": { "body": ["debug_mode(${1:PI}, ${2:Old}, ${3:New})$4\n$0" ], "description":"\tdebug_mode(:PI, -Old, +New)\n\n\tOld is not unified. Only New == off is mapped to disable\n\tdebugging of a predicate.", "prefix":"debug_mode" }, "dialect/ifprolog:digit/1": { "body": ["digit(${1:A})$2\n$0" ], "description":"\tdigit(+A).\n\n\tIs the character A a digit [0-9]", "prefix":"digit" }, "dialect/ifprolog:exit_block/1": { "body": ["exit_block(${1:Tag})$2\n$0" ], "description":"\tblock(:Goal, +Tag, :Recovery).\n\texit_block(+Tag).\n\tcut_block(+Tag) is semidet.\n\n\tThe control construct block/3 runs Goal in a block labelled Tag.\n\tIf Goal calls exit_block/1 using a matching Tag, the execution\n\tof Goal is abandoned using exception handling and execution\n\tcontinues by running Recovery. Goal can call cut_block/1. If\n\tthere is a block with matching Tag, all choice points created\n\tsince the block was started are destroyed.\n\n\t@bug\tThe block control structure is implemented on top of\n\t\tcatch/3 and throw/1. If catch/3 is used inside Goal,\n\t\tthe user must ensure that either (1) the protected\n\t\tgoal does not call exit_block/1 or cut_block/1 or (2)\n\t\tthe _Catcher_ of the catch/3 call does *not* unify with\n\t\ta term block(_,_).", "prefix":"exit_block" }, "dialect/ifprolog:file_test/2": { "body": ["file_test(${1:File}, ${2:Mode})$3\n$0" ], "description":"\tfile_test(+File, +Mode)\n\n\tMapped to access_file/2 (which understand more modes). Note that\n\tthis predicate is defined in the module =system= to allow for\n\tdirect calling.", "prefix":"file_test" }, "dialect/ifprolog:filepos/2": { "body": ["filepos(${1:Stream}, ${2:Line})$3\n$0" ], "description":"\tfilepos(@Stream, -Line)\n\n\tfrom the IF/Prolog documentation The predicate filepos/2\n\tdetermines the current line position of the specified input\n\tstream and unifies the result with Line. The current line\n\tposition is the number of line processed + 1", "prefix":"filepos" }, "dialect/ifprolog:filepos/3": { "body": ["filepos(${1:Stream}, ${2:Line}, ${3:Column})$4\n$0" ], "description":"\tfilepos(@Stream, -Line, -Column)\n\n\tfrom the IF/Prolog documentation The predicate filepos/2\n\tdetermines the current line position of the specified input\n\tstream and unifies the result with Line. The current line\n\tposition is the number of line processed + 1", "prefix":"filepos" }, "dialect/ifprolog:float_format/2": { "body": ["float_format(${1:Old}, ${2:New})$3\n$0" ], "description":"\tfloat_format(-Old, +New)\n\n\tIgnored. Prints a message. Cannot be emulated. 
Printing floats\n\twith a specified precision can only be done using format/2.", "prefix":"float_format" }, "dialect/ifprolog:for/3": { "body": ["for(${1:Start}, ${2:Count}, ${3:End})$4\n$0" ], "description":"\tfor(+Start, ?Count, +End) is nondet.\n\n\tSimilar to between/3, but can count down if Start > End.", "prefix":"for" }, "dialect/ifprolog:get_global/2": { "body": ["get_global(${1:Name}, ${2:Value})$3\n$0" ], "description":"\tcurrent_global(+Name) is semidet.\n\tget_global(+Name, ?Value) is det.\n\tset_global(+Name, ?Value) is det.\n\tunset_global(+Name) is det.\n\n\tIF/Prolog global variables, mapped to SWI-Prolog's nb_*\n\tpredicates.", "prefix":"get_global" }, "dialect/ifprolog:get_until/3": { "body": ["get_until(${1:SearchChar}, ${2:Text}, ${3:EndChar})$4\n$0" ], "description":"\tget_until(+SearchChar, -Text, -EndChar) is det.\n\tget_until(@Stream, +SearchChar, -Text, -EndChar) is det.\n\n\tRead input from Stream until SearchChar. Unify EndChar with\n\teither SearchChar or the atom =end_of_file=.", "prefix":"get_until" }, "dialect/ifprolog:get_until/4": { "body": [ "get_until(${1:Stream}, ${2:SearchChar}, ${3:Text}, ${4:EndChar})$5\n$0" ], "description":"\tget_until(+SearchChar, -Text, -EndChar) is det.\n\tget_until(@Stream, +SearchChar, -Text, -EndChar) is det.\n\n\tRead input from Stream until SearchChar. Unify EndChar with\n\teither SearchChar or the atom =end_of_file=.", "prefix":"get_until" }, "dialect/ifprolog:getchar/3": { "body": ["getchar(${1:Atom}, ${2:Pos}, ${3:Char})$4\n$0" ], "description":"\tgetchar(+Atom, +Pos, -Char)\n\n\tUnifies Char with the Position-th character in Atom\n\tIf Pos < 1 or Pos > length of Atom, then fail.", "prefix":"getchar" }, "dialect/ifprolog:getcwd/1": { "body": ["getcwd(${1:Dir})$2\n$0" ], "description":"\tgetcwd(-Dir)\n\n\tThe predicate getcwd/1 unifies Dir with the full pathname of the\n\tcurrent working directory.", "prefix":"getcwd" }, "dialect/ifprolog:if_concat_atom/2": { "body": ["if_concat_atom(${1:List}, ${2:Atom})$3\n$0" ], "description":"\tif_concat_atom(+List, -Atom) is det.\n\n\tTrue when Atom is the concatenation of the lexical form of all\n\telements from List. Same as if_concat_atom/3 using '' as\n\tdelimiter.", "prefix":"if_concat_atom" }, "dialect/ifprolog:if_concat_atom/3": { "body": ["if_concat_atom(${1:List}, ${2:Delimiter}, ${3:Atom})$4\n$0" ], "description":"\tif_concat_atom(+List, +Delimiter, -Atom) is det.\n\n\tTrue when Atom is the concatenation of the lexical form of all\n\telements from List, using Delimiter to delimit the elements.\n\n\tThe behavior of this ifprolog predicate is different w.r.t.\n\tSWI-Prolog in two respect: it supports arbitrary terms in List\n\trather than only atomic and it does _not_ work in mode -,+,+.", "prefix":"if_concat_atom" }, "dialect/ifprolog:ifprolog_debug/1": { "body": ["ifprolog_debug(${1:Goal})$2\n$0" ], "description":"\tifprolog_debug(:Goal)\n\n\tMap IF/Prolog debug(Goal)@Module. This should run Goal in debug\n\tmode. 
We rarely need this type of measure in SWI-Prolog.", "prefix":"ifprolog_debug" }, "dialect/ifprolog:index/3": { "body": ["index(${1:Atom}, ${2:String}, ${3:Position})$4\n$0" ], "description":"\tindex(+Atom, +String, -Position) is semidet.\n\n\tTrue when Position is the first occurrence of String in Atom.\n\tPosition is 1-based.", "prefix":"index" }, "dialect/ifprolog:letter/1": { "body": ["letter(${1:A})$2\n$0" ], "description":"\tletter(+A).\n\n\tIs the character A a letter [A-Za-z]", "prefix":"letter" }, "dialect/ifprolog:list_length/2": { "body": ["list_length(${1:List}, ${2:Length})$3\n$0" ], "description":"\tlist_length(+List, ?Length) is det.\n\n\tDeterministic version of length/2. Current implementation simply\n\tcalls length/2.", "prefix":"list_length" }, "dialect/ifprolog:load/1": { "body": ["load(${1:File})$2\n$0" ], "description":"\tload(File)\n\n\tMapped to consult. I think that the compatible version should\n\tonly load .qlf (compiled) code.", "prefix":"load" }, "dialect/ifprolog:localtime/9": { "body": [ "localtime(${1:Time}, ${2:Year}, ${3:Month}, ${4:Day}, ${5:DoW}, ${6:DoY}, ${7:Hour}, ${8:Min}, ${9:Sec})$10\n$0" ], "description":"\tlocaltime(+Time, ?Year, ?Month, ?Day, ?DoW, ?DoY, ?Hour, ?Min, ?Sec)\n\n\tBreak system time into its components. Defines components:\n\n\t | Year | Year number | 4 digits |\n\t | Month | Month number | 1..12 |\n\t | Day\t | Day of month | 1..31 |\n\t | DoW\t | Day of week | 1..7 (Mon-Sun) |\n\t | DoY\t | Day in year | 1..366 |\n\t | Hour | Hours\t | 0..23 |\n\t | Min\t | Minutes\t | 0..59 |\n\t | Sec\t | Seconds\t | 0..59 |\n\n\tNote that in IF/Prolog V4, Year is 0..99, while it is a\n\tfour-digit number in IF/Prolog V5. We emulate IF/Prolog V5.", "prefix":"localtime" }, "dialect/ifprolog:lower_upper/2": { "body": ["lower_upper(${1:Lower}, ${2:Upper})$3\n$0" ], "description":"\tlower_upper(+Lower, -Upper) is det.\n\tlower_upper(-Lower, +Upper) is det.\n\n\tMulti-moded combination of upcase_atom/2 and downcase_atom/2.", "prefix":"lower_upper" }, "dialect/ifprolog:match/2": { "body": ["match(${1:Mask}, ${2:Atom})$3\n$0" ], "description":"\tmatch(+Mask, +Atom) is semidet.\n\n\tSame as once(match(Mask, Atom, _Replacements)).", "prefix":"match" }, "dialect/ifprolog:match/3": { "body": ["match(${1:Mask}, ${2:Atom}, ${3:Replacements})$4\n$0" ], "description":"\tmatch(+Mask, +Atom, ?Replacements) is nondet.\n\n\tPattern matching. This emulation should be complete. Can be\n\toptimized using caching of the pattern-analysis or doing the\n\tanalysis at compile-time.", "prefix":"match" }, "dialect/ifprolog:modify_mode/3": { "body": ["modify_mode(${1:PI}, ${2:OldMode}, ${3:NewMode})$4\n$0" ], "description":"\tmodify_mode(+PI, -OldMode, +NewMode) is det.\n\n\tSwitch between static and dynamic code. Fully supported, but\n\tnotably changing static to dynamic code is not allowed if the\n\tpredicate has clauses.", "prefix":"modify_mode" }, "dialect/ifprolog:parse_atom/6": { "body": [ "parse_atom(${1:Atom}, ${2:StartPos}, ${3:EndPos}, ${4:Term}, ${5:VarList}, ${6:Error})$7\n$0" ], "description":"\tparse_atom(+Atom, +StartPos, ?EndPos, ?Term, ?VarList, ?Error)\n\n\tRead from an atom.\n\n\t@param StartPos is 1-based position to start reading\n\t@param Error is the 1-based position of a syntax error or 0 if\n\t there is no error.", "prefix":"parse_atom" }, "dialect/ifprolog:predicate_type/2": { "body": ["predicate_type(${1:PI}, ${2:Type})$3\n$0" ], "description":"\tpredicate_type(:PI, -Type) is det.\n\n\tTrue when Type describes the type of PI. 
Note that the value\n\t=linear= seems to mean you can use clause/2 on it, which is true\n\tfor any SWI-Prolog predicate that is defined. Therefore, we use\n\tit for any predicate that is defined.", "prefix":"predicate_type" }, "dialect/ifprolog:program_parameters/1": { "body": ["program_parameters(${1:List})$2\n$0" ], "description":"\tprogram_parameters(-List:atom)\n\n\tAll command-line argument, including the executable,", "prefix":"program_parameters" }, "dialect/ifprolog:prolog_version/1": { "body": ["prolog_version(${1:Version})$2\n$0" ], "description":"\tprolog_version(-Version)\n\n\tReturn IF/Prolog simulated version string", "prefix":"prolog_version" }, "dialect/ifprolog:proroot/1": { "body": ["proroot(${1:Path})$2\n$0" ], "description":"\tproroot(-Path)\n\n\tTrue when Path is the installation location of the Prolog\n\tsystem.", "prefix":"proroot" }, "dialect/ifprolog:retract_with_names/2": { "body": ["retract_with_names(${1:Clause}, ${2:VarNames})$3\n$0" ], "description":"\tasserta_with_names(@Clause, +VarNames) is det.\n\tassertz_with_names(@Clause, +VarNames) is det.\n\tclause_with_names(?Head, ?Body, -VarNames) is det.\n\tretract_with_names(?Clause, -VarNames) is det.\n\n\tPredicates that manage the database while keeping track of\n\tvariable names.", "prefix":"retract_with_names" }, "dialect/ifprolog:set_default_module/1": { "body": ["set_default_module(${1:Module})$2\n$0" ], "description":"\tset_default_module(+Module) is det.\n\n\tSet the default toplevel module.", "prefix":"set_default_module" }, "dialect/ifprolog:set_global/2": { "body": ["set_global(${1:Name}, ${2:Value})$3\n$0" ], "description":"\tcurrent_global(+Name) is semidet.\n\tget_global(+Name, ?Value) is det.\n\tset_global(+Name, ?Value) is det.\n\tunset_global(+Name) is det.\n\n\tIF/Prolog global variables, mapped to SWI-Prolog's nb_*\n\tpredicates.", "prefix":"set_global" }, "dialect/ifprolog:system_name/1": { "body": ["system_name(${1:SystemName})$2\n$0" ], "description":"\tsystem_name(-SystemName)\n\n\tTrue when SystemName identifies the operating system. Note that\n\tthis returns the SWI-Prolog =arch= flag, and not the IF/Prolog\n\tidentifiers.", "prefix":"system_name" }, "dialect/ifprolog:unset_global/1": { "body": ["unset_global(${1:Name})$2\n$0" ], "description":"\tcurrent_global(+Name) is semidet.\n\tget_global(+Name, ?Value) is det.\n\tset_global(+Name, ?Value) is det.\n\tunset_global(+Name) is det.\n\n\tIF/Prolog global variables, mapped to SWI-Prolog's nb_*\n\tpredicates.", "prefix":"unset_global" }, "dialect/ifprolog:user_parameters/1": { "body": ["user_parameters(${1:List})$2\n$0" ], "description":"\tuser_parameters(-List:atom)\n\n\tParameters after =|--|=.", "prefix":"user_parameters" }, "dialect/ifprolog:write_atom/2": { "body": ["write_atom(${1:Term}, ${2:Atom})$3\n$0" ], "description":"\twrite_atom(+Term, -Atom)\n\n\tUse write/1 to write Term to Atom.", "prefix":"write_atom" }, "dialect/ifprolog:write_formatted/2": { "body": ["write_formatted(${1:Format}, ${2:ArgList})$3\n$0" ], "description":"\twrite_formatted_atom(-Atom, +Format, +ArgList) is det.\n\twrite_formatted(+Format, +ArgList) is det.\n\twrite_formatted(@Stream, +Format, +ArgList) is det.\n\n\tEmulation of IF/Prolog formatted write. The emulation is very\n\tincomplete. 
Notable omissions include dealing with aligned fields, etc.\n\n\t@bug\tNot all format characters are processed\n\t@bug Incomplete processing of modifiers, fieldwidth and precision\n\t@tbd\tThis should become goal-expansion based to process\n\t\tformat specifiers at compile-time.", "prefix":"write_formatted" }, "dialect/ifprolog:write_formatted/3": { "body": ["write_formatted(${1:Atom}, ${2:Format}, ${3:ArgList})$4\n$0" ], "description":"\twrite_formatted_atom(-Atom, +Format, +ArgList) is det.\n\twrite_formatted(+Format, +ArgList) is det.\n\twrite_formatted(@Stream, +Format, +ArgList) is det.\n\n\tEmulation of IF/Prolog formatted write. The emulation is very\n\tincomplete. Notable omissions include dealing with aligned fields, etc.\n\n\t@bug\tNot all format characters are processed\n\t@bug Incomplete processing of modifiers, fieldwidth and precision\n\t@tbd\tThis should become goal-expansion based to process\n\t\tformat specifiers at compile-time.", "prefix":"write_formatted" }, "dialect/ifprolog:write_formatted_atom/3": { "body": ["write_formatted_atom(${1:Atom}, ${2:Format}, ${3:ArgList})$4\n$0" ], "description":"\twrite_formatted_atom(-Atom, +Format, +ArgList) is det.\n\twrite_formatted(+Format, +ArgList) is det.\n\twrite_formatted(@Stream, +Format, +ArgList) is det.\n\n\tEmulation of IF/Prolog formatted write. The emulation is very\n\tincomplete. Notable omissions include dealing with aligned fields, etc.\n\n\t@bug\tNot all format characters are processed\n\t@bug Incomplete processing of modifiers, fieldwidth and precision\n\t@tbd\tThis should become goal-expansion based to process\n\t\tformat specifiers at compile-time.", "prefix":"write_formatted_atom" }, "dialect/ifprolog:writeq_atom/2": { "body": ["writeq_atom(${1:Term}, ${2:Atom})$3\n$0" ], "description":"\twriteq_atom(+Term, -Atom)\n\n\tUse writeq/1 to write Term to Atom.", "prefix":"writeq_atom" }, "dialect/iso/iso_predicates:iso_builtin_function/1": { "body": ["iso_builtin_function(${1:Head})$2\n$0" ], "description":"\tiso_builtin_function(?Head:callable) is nondet.\n\n\tTrue if Head describes a builtin arithmetic function as defined\n\tby the ISO Prolog standard (ISO/IEC 13211-1).", "prefix":"iso_builtin_function" }, "dialect/iso/iso_predicates:iso_builtin_predicate/1": { "body": ["iso_builtin_predicate(${1:Head})$2\n$0" ], "description":"\tiso_builtin_predicate(?Head:callable) is nondet.\n\n\tTrue if Head describes a builtin defined by the ISO Prolog\n\tstandard (ISO/IEC 13211-1).", "prefix":"iso_builtin_predicate" }, "dialect/sicstus/arrays:aref/3": { "body": ["aref(${1:Index}, ${2:Array}, ${3:Element})$4\n$0" ], "description":"\taref(+Index, +Array, -Element) is semidet.\n\n\tTrue if Element is the current element in Array at Index.\n\n\t@compat SICStus-3", "prefix":"aref" }, "dialect/sicstus/arrays:arefa/3": { "body": ["arefa(${1:Index}, ${2:Array}, ${3:ArrayElement})$4\n$0" ], "description":"\tarefa(+Index, +Array, -ArrayElement) is det.\n\n\tAs aref/3, but succeeds with an empty array if the element is\n\tnot set.\n\n\t@compat SICStus-3", "prefix":"arefa" }, "dialect/sicstus/arrays:arefl/3": { "body": ["arefl(${1:Index}, ${2:Array}, ${3:ListElement})$4\n$0" ], "description":"\tarefl(+Index, +Array, -ListElement) is det.\n\n\tAs aref/3, but succeeds with an empty list if the element is not\n\tset.\n\n\t@compat SICStus-3", "prefix":"arefl" }, "dialect/sicstus/arrays:array_to_list/2": { "body": ["array_to_list(${1:Array}, ${2:Pairs})$3\n$0" ], "description":"\tarray_to_list(+Array, -Pairs) is det.\n\n\t@compat SICStus-3", "prefix":"array_to_list" }, 
"dialect/sicstus/arrays:aset/4": { "body": ["aset(${1:Index}, ${2:Array}, ${3:Element}, ${4:NewArray})$5\n$0" ], "description":"\taset(+Index, +Array, +Element, -NewArray) is det.\n\n\tNewArray is Array with Element added/updated at Index.\n\n\t@compat SICStus-3", "prefix":"aset" }, "dialect/sicstus/arrays:is_array/1": { "body": ["is_array(${1:Term})$2\n$0" ], "description":"\tis_array(@Term) is semidet.\n\n\tTrue if Term is an array\n\n\t@compat SICStus-3", "prefix":"is_array" }, "dialect/sicstus/arrays:new_array/1": { "body": ["new_array(${1:Array})$2\n$0" ], "description":"\tnew_array(-Array) is det.\n\n\t@compat SICStus-3", "prefix":"new_array" }, "dialect/sicstus/block_directive:block/1": { "body": ["block(${1:Heads})$2\n$0" ], "description":"\tblock(+Heads).\n\n\tDeclare predicates to suspend on certain modes. The argument is,\n\tlike meta_predicate/1, a comma-separated list of modes\n\t(_BlockSpecs_). Calls to the predicate are suspended if at least\n\tone of the conditions implied by a blockspec evaluates to\n\t=true=. A blockspec evaluates to =true= iff all arguments\n\tspecified as `-' are unbound.\n\n\tMultiple BlockSpecs for a single predicate can appear in one or\n\tmore :- block declarations. The predicate is suspended until\n\tall mode patterns that apply to it are satisfied.\n\n\tThe implementation is realised by creating a wrapper that uses\n\twhen/2 to realize suspension of the renamed predicate.\n\n\t@compat SICStus Prolog\n\t@compat If the predicate is blocked on multiple conditions, it\n\t\twill not unblock before _all_ conditions are satisfied.\n\t\tSICStus unblocks when one arbitrary condition is\n\t\tsatisfied.\n\t@bug\tIt is not possible to block on a dynamic predicate\n\t\tbecause we cannot wrap assert/1. Likewise, we cannot\n\t\tblock foreign predicates, although it would be easier\n\t\tto support this.", "prefix":"block" }, "dialect/sicstus/sicstus_lists:append/2": { "body": ["append(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"append('Param1','Param2')", "prefix":"append" }, "dialect/sicstus/sicstus_lists:append/3": { "body": ["append(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"append('Param1','Param2','Param3')", "prefix":"append" }, "dialect/sicstus/sicstus_lists:delete/3": { "body": ["delete(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"delete('Param1','Param2','Param3')", "prefix":"delete" }, "dialect/sicstus/sicstus_lists:flatten/2": { "body": ["flatten(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"flatten('Param1','Param2')", "prefix":"flatten" }, "dialect/sicstus/sicstus_lists:intersection/3": { "body": ["intersection(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"intersection('Param1','Param2','Param3')", "prefix":"intersection" }, "dialect/sicstus/sicstus_lists:is_set/1": { "body": ["is_set(${1:'Param1'})$2\n$0" ], "description":"is_set('Param1')", "prefix":"is_set" }, "dialect/sicstus/sicstus_lists:last/2": { "body": ["last(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"last('Param1','Param2')", "prefix":"last" }, "dialect/sicstus/sicstus_lists:list_to_set/2": { "body": ["list_to_set(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"list_to_set('Param1','Param2')", "prefix":"list_to_set" }, "dialect/sicstus/sicstus_lists:max_list/2": { "body": ["max_list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"max_list('Param1','Param2')", "prefix":"max_list" }, "dialect/sicstus/sicstus_lists:max_member/2": { "body": ["max_member(${1:'Param1'}, 
${2:'Param2'})$3\n$0" ], "description":"max_member('Param1','Param2')", "prefix":"max_member" }, "dialect/sicstus/sicstus_lists:member/2": { "body": ["member(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"member('Param1','Param2')", "prefix":"member" }, "dialect/sicstus/sicstus_lists:min_list/2": { "body": ["min_list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"min_list('Param1','Param2')", "prefix":"min_list" }, "dialect/sicstus/sicstus_lists:min_member/2": { "body": ["min_member(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"min_member('Param1','Param2')", "prefix":"min_member" }, "dialect/sicstus/sicstus_lists:nextto/3": { "body": ["nextto(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nextto('Param1','Param2','Param3')", "prefix":"nextto" }, "dialect/sicstus/sicstus_lists:nth/3": { "body": ["nth(${1:Index}, ${2:List}, ${3:Element})$4\n$0" ], "description":"\tnth(?Index, ?List, ?Element) is nondet.\n\n\tTrue if Element is the N-th element in List. Counting starts at\n\t1.\n\n\t@deprecated use nth1/3.", "prefix":"nth" }, "dialect/sicstus/sicstus_lists:nth0/3": { "body": ["nth0(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nth0('Param1','Param2','Param3')", "prefix":"nth0" }, "dialect/sicstus/sicstus_lists:nth0/4": { "body": [ "nth0(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"nth0('Param1','Param2','Param3','Param4')", "prefix":"nth0" }, "dialect/sicstus/sicstus_lists:nth1/3": { "body": ["nth1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nth1('Param1','Param2','Param3')", "prefix":"nth1" }, "dialect/sicstus/sicstus_lists:nth1/4": { "body": [ "nth1(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"nth1('Param1','Param2','Param3','Param4')", "prefix":"nth1" }, "dialect/sicstus/sicstus_lists:numlist/3": { "body": ["numlist(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"numlist('Param1','Param2','Param3')", "prefix":"numlist" }, "dialect/sicstus/sicstus_lists:permutation/2": { "body": ["permutation(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"permutation('Param1','Param2')", "prefix":"permutation" }, "dialect/sicstus/sicstus_lists:prefix/2": { "body": ["prefix(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"prefix('Param1','Param2')", "prefix":"prefix" }, "dialect/sicstus/sicstus_lists:proper_length/2": { "body": ["proper_length(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"proper_length('Param1','Param2')", "prefix":"proper_length" }, "dialect/sicstus/sicstus_lists:reverse/2": { "body": ["reverse(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"reverse('Param1','Param2')", "prefix":"reverse" }, "dialect/sicstus/sicstus_lists:same_length/2": { "body": ["same_length(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"same_length('Param1','Param2')", "prefix":"same_length" }, "dialect/sicstus/sicstus_lists:select/3": { "body": ["select(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"select('Param1','Param2','Param3')", "prefix":"select" }, "dialect/sicstus/sicstus_lists:select/4": { "body": [ "select(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"select('Param1','Param2','Param3','Param4')", "prefix":"select" }, "dialect/sicstus/sicstus_lists:selectchk/3": { "body": ["selectchk(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"selectchk('Param1','Param2','Param3')", "prefix":"selectchk" 
}, "dialect/sicstus/sicstus_lists:selectchk/4": { "body": [ "selectchk(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"selectchk('Param1','Param2','Param3','Param4')", "prefix":"selectchk" }, "dialect/sicstus/sicstus_lists:sublist/2": { "body": ["sublist(${1:Sub}, ${2:List})$3\n$0" ], "description":"\tsublist(?Sub, +List)\n\n\tTrue when all members of Sub are members of List in the same\n\torder.\n\n\t@compat sicstus. The order of generating sublists differs.\n\t@compat This predicate is known in many Prolog implementations,\n\t\tbut the semantics differ. E.g. In YAP it is a continuous\n\t\tsub-list.", "prefix":"sublist" }, "dialect/sicstus/sicstus_lists:subset/2": { "body": ["subset(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"subset('Param1','Param2')", "prefix":"subset" }, "dialect/sicstus/sicstus_lists:substitute/4": { "body": [ "substitute(${1:OldElem}, ${2:List}, ${3:NewElem}, ${4:NewList})$5\n$0" ], "description":"\tsubstitute(+OldElem, +List, +NewElem, -NewList) is det.\n\n\tNewList is as List with all values that are identical (==) to OldElem\n\treplaced by NewElem.", "prefix":"substitute" }, "dialect/sicstus/sicstus_lists:subtract/3": { "body": ["subtract(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"subtract('Param1','Param2','Param3')", "prefix":"subtract" }, "dialect/sicstus/sicstus_lists:sum_list/2": { "body": ["sum_list(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"sum_list('Param1','Param2')", "prefix":"sum_list" }, "dialect/sicstus/sicstus_lists:union/3": { "body": ["union(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"union('Param1','Param2','Param3')", "prefix":"union" }, "dialect/sicstus/sicstus_sockets:current_host/1": { "body": ["current_host(${1:Host})$2\n$0" ], "description":"\tcurrent_host(-Host) is det.\n\n\tTrue when Host is an atom that denotes the name of the host.\n", "prefix":"current_host" }, "dialect/sicstus/sicstus_sockets:hostname_address/2": { "body": ["hostname_address(${1:Host}, ${2:Address})$3\n$0" ], "description":"\thostname_address(+Host:atom, -Address:atom) is det.\n\n\tTrue when Address is the IP address of Host.", "prefix":"hostname_address" }, "dialect/sicstus/sicstus_sockets:socket/2": { "body": ["socket(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"socket('Param1','Param2')", "prefix":"socket" }, "dialect/sicstus/sicstus_sockets:socket_accept/2": { "body": ["socket_accept(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"socket_accept('Param1','Param2')", "prefix":"socket_accept" }, "dialect/sicstus/sicstus_sockets:socket_accept/3": { "body": ["socket_accept(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"socket_accept('Param1','Param2','Param3')", "prefix":"socket_accept" }, "dialect/sicstus/sicstus_sockets:socket_bind/2": { "body": ["socket_bind(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"socket_bind('Param1','Param2')", "prefix":"socket_bind" }, "dialect/sicstus/sicstus_sockets:socket_close/1": { "body": ["socket_close(${1:'Param1'})$2\n$0" ], "description":"socket_close('Param1')", "prefix":"socket_close" }, "dialect/sicstus/sicstus_sockets:socket_connect/3": { "body": ["socket_connect(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"socket_connect('Param1','Param2','Param3')", "prefix":"socket_connect" }, "dialect/sicstus/sicstus_sockets:socket_listen/2": { "body": ["socket_listen(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"socket_listen('Param1','Param2')", 
"prefix":"socket_listen" }, "dialect/sicstus/sicstus_sockets:socket_select/5": { "body": [ "socket_select(${1:TermsSockets}, ${2:NewTermsStreams}, ${3:TimeOut}, ${4:Streams}, ${5:ReadStreams})$6\n$0" ], "description":"\tsocket_select(+TermsSockets, -NewTermsStreams,\n\t\t +TimeOut, +Streams, -ReadStreams) is det.\n\n\tThe list of streams in Streams is checked for readable\n\tcharacters. A stream can be any stream associated with an I/O\n\tdescriptor. The list ReadStreams returns the streams with\n\treadable data. socket_select/5 also waits for connections to the\n\tsockets specified by TermsSockets. This argument should be a\n\tlist of Term-Socket pairs, where Term, which can be any term, is\n\tused as an identifier. NewTermsStreams is a list of\n\tTerm-connection(Client,Stream) pairs, where Stream is a new\n\tstream open for communicating with a process connecting to the\n\tsocket identified with Term, Client is the client host address\n\t(see socket_accept/3). If TimeOut is instantiated to off, the\n\tpredicate waits until something is available. If TimeOut is S:U\n\tthe predicate waits at most S seconds and U microseconds. Both S\n\tand U must be integers >=0. If there is a timeout, ReadStreams\n\tand NewTermsStreams are [].", "prefix":"socket_select" }, "dialect/sicstus/sicstus_system:datime/1": { "body": ["datime(${1:Datime})$2\n$0" ], "description":"\tdatime(-Datime) is det.\n\n\tUnifies Datime with the current date and time as a datime/6\n\trecord of the form datime(Year,Month,Day,Hour,Min,Sec). All\n\tfields are integers.\n\n\t@compat sicstus", "prefix":"datime" }, "dialect/sicstus/sicstus_system:datime/2": { "body": ["datime(${1:When}, ${2:Datime})$3\n$0" ], "description":"\tdatime(+When, -Datime) is det.\n\n\tTrue when Datime is a datime/6 record that reflects the time\n\tstamp When.\n\n\t@compat sicstus", "prefix":"datime" }, "dialect/sicstus/sicstus_system:delete_file/1": { "body": ["delete_file(${1:'Param1'})$2\n$0" ], "description":"delete_file('Param1')", "prefix":"delete_file" }, "dialect/sicstus/sicstus_system:environ/2": { "body": ["environ(${1:Name}, ${2:Value})$3\n$0" ], "description":"\tenviron(?Name, ?Value) is nondet.\n\n\tTrue if Value is an atom associated with the environment variable\n\tName.\n\n\t@tbd\tMode -Name is not supported", "prefix":"environ" }, "dialect/sicstus/sicstus_system:exec/3": { "body": ["exec(${1:Command}, ${2:Streams}, ${3:PID})$4\n$0" ], "description":"\texec(+Command, +Streams, -PID)\n\n\tSICStus 3 compatible implementation of exec/3 on top of the\n\tSICStus 4 compatible process_create/3.\n\n\t@bug\tThe SICStus version for Windows seems to hand Command\n\t\tdirectly to CreateProcess(). 
We hand it to\n\n\t\t ==\n\t\t %COMSPEC% /s /c \"Command\"\n\t\t ==\n\n\t\tIn case of conflict, it is advised to use\n\t\tprocess_create/3", "prefix":"exec" }, "dialect/sicstus/sicstus_system:file_exists/1": { "body": ["file_exists(${1:FileName})$2\n$0" ], "description":"\tfile_exists(+FileName) is semidet.\n\n\tTrue if a file named FileName exists.\n\n\t@compat sicstus", "prefix":"file_exists" }, "dialect/sicstus/sicstus_system:host_name/1": { "body": ["host_name(${1:HostName})$2\n$0" ], "description":"\thost_name(-HostName)\n\n\t@compat sicstus\n\t@see gethostname/1", "prefix":"host_name" }, "dialect/sicstus/sicstus_system:make_directory/1": { "body": ["make_directory(${1:'Param1'})$2\n$0" ], "description":"make_directory('Param1')", "prefix":"make_directory" }, "dialect/sicstus/sicstus_system:mktemp/2": { "body": ["mktemp(${1:Template}, ${2:File})$3\n$0" ], "description":"\tmktemp(+Template, -File) is det.\n\n\tInterface to the Unix function. This emulation uses\n\ttmp_file/2 and ignores Template.\n\n\t@compat sicstus\n\t@deprecated This interface is a security risk. Use\n\ttmp_file_stream/3.", "prefix":"mktemp" }, "dialect/sicstus/sicstus_system:now/1": { "body": ["now(${1:When})$2\n$0" ], "description":"\tnow(-When) is det.\n\n\tUnify When with the current time-stamp\n\n\t@compat sicstus", "prefix":"now" }, "dialect/sicstus/sicstus_system:pid/1": { "body": ["pid(${1:PID})$2\n$0" ], "description":"\tpid(-PID)\n\n\tProcess ID of the current process.\n\n\t@compat sicstus.", "prefix":"pid" }, "dialect/sicstus/sicstus_system:popen/3": { "body": ["popen(${1:Command}, ${2:Mode}, ${3:Stream})$4\n$0" ], "description":"\tpopen(+Command, +Mode, ?Stream)\n\n\t@compat sicstus", "prefix":"popen" }, "dialect/sicstus/sicstus_system:rename_file/2": { "body": ["rename_file(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"rename_file('Param1','Param2')", "prefix":"rename_file" }, "dialect/sicstus/sicstus_system:shell/0": {"body": ["shell$1\n$0" ], "description":"shell", "prefix":"shell"}, "dialect/sicstus/sicstus_system:shell/1": { "body": ["shell(${1:'Param1'})$2\n$0" ], "description":"shell('Param1')", "prefix":"shell" }, "dialect/sicstus/sicstus_system:shell/2": { "body": ["shell(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"shell('Param1','Param2')", "prefix":"shell" }, "dialect/sicstus/sicstus_system:sleep/1": { "body": ["sleep(${1:'Param1'})$2\n$0" ], "description":"sleep('Param1')", "prefix":"sleep" }, "dialect/sicstus/sicstus_system:system/0": { "body": ["system$1\n$0" ], "description":"\tsystem.\n\tsystem(+Command).\n\tsystem(+Command, -Status).\n\n\tSynonyms for shell/0, shell/1 and shell/2.\n\n\t@compat sicstus.", "prefix":"system" }, "dialect/sicstus/sicstus_system:system/1": { "body": ["system(${1:Command})$2\n$0" ], "description":"\tsystem.\n\tsystem(+Command).\n\tsystem(+Command, -Status).\n\n\tSynonyms for shell/0, shell/1 and shell/2.\n\n\t@compat sicstus.", "prefix":"system" }, "dialect/sicstus/sicstus_system:system/2": { "body": ["system(${1:Command}, ${2:Status})$3\n$0" ], "description":"\tsystem.\n\tsystem(+Command).\n\tsystem(+Command, -Status).\n\n\tSynonyms for shell/0, shell/1 and shell/2.\n\n\t@compat sicstus.", "prefix":"system" }, "dialect/sicstus/sicstus_system:tmpnam/1": { "body": ["tmpnam(${1:FileName})$2\n$0" ], "description":"\ttmpnam(-FileName)\n\n\tInterface to tmpnam(). This emulation uses tmp_file/2.\n\n\t@compat sicstus\n\t@deprecated This interface is a security risk. 
Use\n\ttmp_file_stream/3.", "prefix":"tmpnam" }, "dialect/sicstus/sicstus_system:wait/2": { "body": ["wait(${1:PID}, ${2:Status})$3\n$0" ], "description":"\twait(+PID, -Status)\n\n\tWait for processes created using exec/3.\n\n\t@see exec/3", "prefix":"wait" }, "dialect/sicstus/sicstus_system:working_directory/2": { "body": ["working_directory(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"working_directory('Param1','Param2')", "prefix":"working_directory" }, "dialect/sicstus/sicstus_terms:acyclic_term/1": { "body": ["acyclic_term(${1:'Param1'})$2\n$0" ], "description":"acyclic_term('Param1')", "prefix":"acyclic_term" }, "dialect/sicstus/sicstus_terms:cyclic_term/1": { "body": ["cyclic_term(${1:'Param1'})$2\n$0" ], "description":"cyclic_term('Param1')", "prefix":"cyclic_term" }, "dialect/sicstus/sicstus_terms:subsumes/2": { "body": ["subsumes(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"subsumes('Param1','Param2')", "prefix":"subsumes" }, "dialect/sicstus/sicstus_terms:subsumes_chk/2": { "body": ["subsumes_chk(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"subsumes_chk('Param1','Param2')", "prefix":"subsumes_chk" }, "dialect/sicstus/sicstus_terms:term_factorized/3": { "body": [ "term_factorized(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"term_factorized('Param1','Param2','Param3')", "prefix":"term_factorized" }, "dialect/sicstus/sicstus_terms:term_hash/2": { "body": ["term_hash(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"term_hash('Param1','Param2')", "prefix":"term_hash" }, "dialect/sicstus/sicstus_terms:term_hash/4": { "body": [ "term_hash(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"term_hash('Param1','Param2','Param3','Param4')", "prefix":"term_hash" }, "dialect/sicstus/sicstus_terms:term_size/2": { "body": ["term_size(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"term_size('Param1','Param2')", "prefix":"term_size" }, "dialect/sicstus/sicstus_terms:term_subsumer/3": { "body": ["term_subsumer(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"term_subsumer('Param1','Param2','Param3')", "prefix":"term_subsumer" }, "dialect/sicstus/sicstus_terms:term_variables/2": { "body": ["term_variables(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"term_variables('Param1','Param2')", "prefix":"term_variables" }, "dialect/sicstus/sicstus_terms:term_variables/3": { "body": ["term_variables(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"term_variables('Param1','Param2','Param3')", "prefix":"term_variables" }, "dialect/sicstus/sicstus_terms:term_variables_bag/2": { "body": ["term_variables_bag(${1:Term}, ${2:Variables})$3\n$0" ], "description":"\tterm_variables_bag(+Term, -Variables) is det.\n\n\tVariables is a list of variables that appear in Term. The\n\tvariables are ordered according to depth-first lef-right walking\n\tof the term. Variables contains no duplicates. This is the same\n\tas SWI-Prolog's term_variables.", "prefix":"term_variables_bag" }, "dialect/sicstus/sicstus_terms:variant/2": { "body": ["variant(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"variant('Param1','Param2')", "prefix":"variant" }, "dialect/sicstus/timeout:time_out/3": { "body": ["time_out(${1:Goal}, ${2:Time_ms}, ${3:Result})$4\n$0" ], "description":"\ttime_out(:Goal, +Time_ms, -Result) is nondet.\n\n\tThis library provides a SICStus compatible implementation of\n\ttime-outs. 
Unfortunately, it is not fully compatible:\n\n\t * Time is measured in wall-time instead of virtual CPU.\n\n\tVirtual CPU time is hard in threaded-environments. On most\n\tsystems, you probably need a thread that measures the CPU usage\n\tof the monitored thread.", "prefix":"time_out" }, "dialect/sicstus:bb_delete/2": { "body": ["bb_delete(${1:Name}, ${2:Value})$3\n$0" ], "description":"\tbb_put(:Name, +Value) is det.\n\tbb_get(:Name, -Value) is semidet.\n\tbb_delete(:Name, -Value) is semidet.\n\tbb_update(:Name, -Old, +New) is semidet.\n\n\tSICStus compatible blackboard routines. The implementations only\n\tdeal with cases where the module-sensitive key is unknown and\n\tmeta-calling. Simple cases are directly mapped to SWI-Prolog\n\tnon-backtrackable global variables.", "prefix":"bb_delete" }, "dialect/sicstus:bb_get/2": { "body": ["bb_get(${1:Name}, ${2:Value})$3\n$0" ], "description":"\tbb_put(:Name, +Value) is det.\n\tbb_get(:Name, -Value) is semidet.\n\tbb_delete(:Name, -Value) is semidet.\n\tbb_update(:Name, -Old, +New) is semidet.\n\n\tSICStus compatible blackboard routines. The implementations only\n\tdeal with cases where the module-sensitive key is unknown and\n\tmeta-calling. Simple cases are directly mapped to SWI-Prolog\n\tnon-backtrackable global variables.", "prefix":"bb_get" }, "dialect/sicstus:bb_put/2": { "body": ["bb_put(${1:Name}, ${2:Value})$3\n$0" ], "description":"\tbb_put(:Name, +Value) is det.\n\tbb_get(:Name, -Value) is semidet.\n\tbb_delete(:Name, -Value) is semidet.\n\tbb_update(:Name, -Old, +New) is semidet.\n\n\tSICStus compatible blackboard routines. The implementations only\n\tdeal with cases where the module-sensitive key is unknown and\n\tmeta-calling. Simple cases are directly mapped to SWI-Prolog\n\tnon-backtrackable global variables.", "prefix":"bb_put" }, "dialect/sicstus:bb_update/3": { "body": ["bb_update(${1:Name}, ${2:Old}, ${3:New})$4\n$0" ], "description":"\tbb_put(:Name, +Value) is det.\n\tbb_get(:Name, -Value) is semidet.\n\tbb_delete(:Name, -Value) is semidet.\n\tbb_update(:Name, -Old, +New) is semidet.\n\n\tSICStus compatible blackboard routines. The implementations only\n\tdeal with cases where the module-sensitive key is unknown and\n\tmeta-calling. Simple cases are directly mapped to SWI-Prolog\n\tnon-backtrackable global variables.", "prefix":"bb_update" }, "dialect/sicstus:block/1": { "body": ["block(${1:'Param1'})$2\n$0" ], "description":"block('Param1')", "prefix":"block" }, "dialect/sicstus:create_mutable/2": { "body": ["create_mutable(${1:Value}, ${2:Mutable})$3\n$0" ], "description":"\tcreate_mutable(?Value, -Mutable) is det.\n\n\tCreate a mutable term with the given initial Value.\n\n\t@compat sicstus", "prefix":"create_mutable" }, "dialect/sicstus:get_mutable/2": { "body": ["get_mutable(${1:Value}, ${2:Mutable})$3\n$0" ], "description":"\tget_mutable(?Value, +Mutable) is semidet.\n\n\tTrue if Value unifies with the current value of Mutable.\n\n\t@compat sicstus", "prefix":"get_mutable" }, "dialect/sicstus:if/3": { "body": ["if(${1:If}, ${2:Then}, ${3:Else})$4\n$0" ], "description":"\tif(:If, :Then, :Else)\n\n\tSame as SWI-Prolog soft-cut construct. Normally, this is\n\ttranslated using goal-expansion. 
If either term contains a !, we\n\tuse meta-calling for full compatibility (i.e., scoping the cut).", "prefix":"if" }, "dialect/sicstus:prolog_flag/2": { "body": ["prolog_flag(${1:Flag}, ${2:Value})$3\n$0" ], "description":"\tprolog_flag(+Flag, -Value) is semidet.\n\n\tQuery a Prolog flag, mapping SICSTus flags to SWI-Prolog flags", "prefix":"prolog_flag" }, "dialect/sicstus:prolog_flag/3": { "body": ["prolog_flag(${1:Flag}, ${2:Old}, ${3:New})$4\n$0" ], "description":"\tprolog_flag(+Flag, -Old, +New) is semidet.\n\n\tQuery and set a Prolog flag. Use the debug/1 topic =prolog_flag=\n\tto find the flags accessed using this predicate.", "prefix":"prolog_flag" }, "dialect/sicstus:read_line/1": { "body": ["read_line(${1:Codes})$2\n$0" ], "description":"\tread_line(-Codes) is det.\n\tread_line(+Stream, -Codes) is det.\n\n\tRead a line from the given or current input. The line read does\n\t_not_ include the line-termination character. Unifies Codes with\n\t=end_of_file= if the end of the input is reached.\n\n\t@compat sicstus\n\t@see\tThe SWI-Prolog primitive is read_line_to_codes/2.", "prefix":"read_line" }, "dialect/sicstus:read_line/2": { "body": ["read_line(${1:Stream}, ${2:Codes})$3\n$0" ], "description":"\tread_line(-Codes) is det.\n\tread_line(+Stream, -Codes) is det.\n\n\tRead a line from the given or current input. The line read does\n\t_not_ include the line-termination character. Unifies Codes with\n\t=end_of_file= if the end of the input is reached.\n\n\t@compat sicstus\n\t@see\tThe SWI-Prolog primitive is read_line_to_codes/2.", "prefix":"read_line" }, "dialect/sicstus:trimcore/0": { "body": ["trimcore$1\n$0" ], "description":"\ttrimcore\n\n\tTrims the stacks. Other tasks of the SICStus trimcore/0 are\n\tautomatically scheduled by SWI-Prolog.", "prefix":"trimcore" }, "dialect/sicstus:update_mutable/2": { "body": ["update_mutable(${1:Value}, ${2:Mutable})$3\n$0" ], "description":"\tupdate_mutable(?Value, !Mutable) is det.\n\n\tSet the value of Mutable to Value. The old binding is\n\trestored on backtracking.\n\n\t@see setarg/3.\n\t@compat sicstus", "prefix":"update_mutable" }, "dialect/sicstus:use_module/3": { "body": ["use_module(${1:Module}, ${2:File}, ${3:Imports})$4\n$0" ], "description":"\tuse_module(+Module, -File, +Imports) is det.\n\tuse_module(-Module, +File, +Imports) is det.\n\n\tThis predicate can be used to import from a named module while\n\tthe file-location of the module is unknown or to get access to\n\tthe module-name loaded from a file.\n\n\tIf both Module and File are given, we use Module and try to\n\tunify File with the absolute canonical path to the file from\n\twhich Module was loaded. However, we succeed regardless of the\n\tsuccess of this unification.", "prefix":"use_module" }, "dialect/yap:assert_static/1": { "body": ["assert_static(${1:Term})$2\n$0" ], "description":"\tassert_static(:Term)\n\n\tAssert as static predicate. SWI-Prolog provides\n\tcompile_predicates/1 to achieve this. 
The emulation is a mere\n\talias for assert/1, as immediate compilation would prohibit\n\tfurther calls to this predicate.\n\n\t@compat yap\n\t@deprecated Use assert/1 and compile_predicates/1 after\n\tcompleting the predicate definition.", "prefix":"assert_static" }, "dialect/yap:depth_bound_call/2": { "body": ["depth_bound_call(${1:Goal}, ${2:Limit})$3\n$0" ], "description":"\tdepth_bound_call(:Goal, :Limit)\n\n\tEquivalent to call_with_depth_limit(Goal, Limit, _Reached)\n\n\t@compat yap", "prefix":"depth_bound_call" }, "dialect/yap:exists/1": { "body": ["exists(${1:File})$2\n$0" ], "description":"\texists(+File)\n\n\tEquivalent to exists_file(File).\n\n\t@compat yap", "prefix":"exists" }, "dialect/yap:gc/0": { "body": ["gc$1\n$0" ], "description":"\tgc\n\n\tGarbage collect.\n\n\t@compat yap", "prefix":"gc" }, "dialect/yap:source/0": { "body": ["source$1\n$0" ], "description":"\tsource is det.\n\n\tYAP directive to maintain source-information. We have that\n\talways.", "prefix":"source" }, "dialect/yap:system/1": { "body": ["system(${1:Command})$2\n$0" ], "description":"\tsystem(+Command)\n\n\tEquivalent to shell(Command).\n\n\t@compat yap", "prefix":"system" }, "dialect/yap:yap_flag/2": { "body": ["yap_flag(${1:Key}, ${2:Value})$3\n$0" ], "description":"\tyap_flag(+Key, +Value) is det.\n\n\tMap some YAP flags to SWI-Prolog. Supported flags:\n\n\t * write_strings: Bool\n\t If =on=, writes strings as \"...\" instead of a list of\n\t integers. In SWI-Prolog this only affects write routines\n\t that use portray.", "prefix":"yap_flag" }, "dialect/yap:yap_style_check/1": { "body": ["yap_style_check(${1:Style})$2\n$0" ], "description":"\tyap_style_check(+Style) is det.\n\n\tMap YAP style-check options onto the SWI-Prolog ones.", "prefix":"yap_style_check" }, "dict_create/3": { "body":"dict_create(${1:Dict}, ${2:Tag}, ${3:Data})$4\n$0", "description":"dict_create(-Dict, +Tag, +Data).\nCreate a dict in Tag from Data. Data is a list of attribute-value pairs using the syntax Key:Value, Key=Value, Key-Value or Key(Value). An exception is raised if Data is not a proper list, one of the elements is not of the shape above, a key is neither an atom nor a small integer or there is a duplicate key.", "prefix":"dict_create" }, "dict_pairs/3": { "body":"dict_pairs(${1:Dict}, ${2:Tag}, ${3:Pairs})$4\n$0", "description":"dict_pairs(?Dict, ?Tag, ?Pairs).\nBi-directional mapping between a dict and an ordered list of pairs (see section A.21).", "prefix":"dict_pairs" }, "dicts:dict_fill/4": { "body": ["dict_fill(${1:ValueIn}, ${2:Key}, ${3:Dict}, ${4:Value})$5\n$0" ], "description":" dict_fill(+ValueIn, +Key, +Dict, -Value) is det.\n\n Implementation for the dicts_to_same_keys/3 `OnEmpty` closure\n that fills new cells with a copy of ValueIn. Note that\n copy_term/2 does not really copy ground terms. Below are two\n examples. 
Note that when filling empty cells with a variable,\n each empty cell is bound to a new variable.\n\n ==\n ?- dicts_to_same_keys([r{x:1}, r{y:2}], dict_fill(null), L).\n L = [r{x:1, y:null}, r{x:null, y:2}].\n ?- dicts_to_same_keys([r{x:1}, r{y:2}], dict_fill(_), L).\n L = [r{x:1, y:_G2005}, r{x:_G2036, y:2}].\n ==\n\n Use dict_no_fill/3 to raise an error if a dict is missing a key.", "prefix":"dict_fill" }, "dicts:dict_keys/2": { "body": ["dict_keys(${1:Dict}, ${2:Keys})$3\n$0" ], "description":" dict_keys(+Dict, -Keys) is det.\n\n True when Keys is an ordered set of the keys appearing in Dict.", "prefix":"dict_keys" }, "dicts:dict_no_fill/3": { "body": ["dict_no_fill(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"dict_no_fill('Param1','Param2','Param3')", "prefix":"dict_no_fill" }, "dicts:dicts_join/3": { "body": ["dicts_join(${1:Key}, ${2:DictsIn}, ${3:Dicts})$4\n$0" ], "description":" dicts_join(+Key, +DictsIn, -Dicts) is semidet.\n\n Join dicts in Dicts that have the same value for Key, provided\n they do not have conflicting values on other keys. For example:\n\n ==\n ?- dicts_join(x, [r{x:1, y:2}, r{x:1, z:3}, r{x:2,y:4}], L).\n L = [r{x:1, y:2, z:3}, r{x:2, y:4}].\n ==\n\n @error existence_error(key, Key, Dict) if a dict in Dicts1\n or Dicts2 does not contain Key.", "prefix":"dicts_join" }, "dicts:dicts_join/4": { "body": ["dicts_join(${1:Key}, ${2:Dicts1}, ${3:Dicts2}, ${4:Dicts})$5\n$0" ], "description":" dicts_join(+Key, +Dicts1, +Dicts2, -Dicts) is semidet.\n\n Join two lists of dicts (Dicts1 and Dicts2) on Key. Each pair\n D1-D2 from Dicts1 and Dicts2 that have the same (==) value for\n Key creates a new dict D with the union of the keys from D1 and\n D2, provided D1 and D2 to not have conflicting values for some\n key. For example:\n\n ==\n ?- DL1 = [r{x:1,y:1},r{x:2,y:4}],\n DL2 = [r{x:1,z:2},r{x:3,z:4}],\n dicts_join(x, DL1, DL2, DL).\n DL = [r{x:1, y:1, z:2}, r{x:2, y:4}, r{x:3, z:4}].\n ==\n\n @error existence_error(key, Key, Dict) if a dict in Dicts1\n or Dicts2 does not contain Key.", "prefix":"dicts_join" }, "dicts:dicts_same_keys/2": { "body": ["dicts_same_keys(${1:List}, ${2:Keys})$3\n$0" ], "description":" dicts_same_keys(+List, -Keys) is semidet.\n\n True if List is a list of dicts that all have the same keys and\n Keys is an ordered set of these keys.", "prefix":"dicts_same_keys" }, "dicts:dicts_same_tag/2": { "body": ["dicts_same_tag(${1:List}, ${2:Tag})$3\n$0" ], "description":" dicts_same_tag(+List, -Tag) is semidet.\n\n True when List is a list of dicts that all have the tag Tag.", "prefix":"dicts_same_tag" }, "dicts:dicts_slice/3": { "body": ["dicts_slice(${1:Keys}, ${2:DictsIn}, ${3:DictsOut})$4\n$0" ], "description":" dicts_slice(+Keys, +DictsIn, -DictsOut) is det.\n\n DictsOut is a list of Dicts only containing values for Keys.", "prefix":"dicts_slice" }, "dicts:dicts_to_compounds/4": { "body": [ "dicts_to_compounds(${1:Dicts}, ${2:Keys}, ${3:OnEmpty}, ${4:Compounds})$5\n$0" ], "description":" dicts_to_compounds(?Dicts, +Keys, :OnEmpty, ?Compounds) is semidet.\n\n True when Dicts and Compounds are lists of the same length and\n each element of Compounds is a compound term whose arguments\n represent the values associated with the corresponding keys in\n Keys. When converting from dict to row, OnEmpty is used to\n compute missing values. The functor for the compound is the same\n as the tag of the pair. When converting from dict to row and the\n dict has no tag, the functor `row` is used. 
For example:\n\n ==\n ?- Dicts = [_{x:1}, _{x:2, y:3}],\n dicts_to_compounds(Dicts, [x], dict_fill(null), Compounds).\n Compounds = [row(1), row(2)].\n ?- Dicts = [_{x:1}, _{x:2, y:3}],\n dicts_to_compounds(Dicts, [x,y], dict_fill(null), Compounds).\n Compounds = [row(1, null), row(2, 3)].\n ?- Compounds = [point(1,1), point(2,4)],\n dicts_to_compounds(Dicts, [x,y], dict_fill(null), Compounds).\n Dicts = [point{x:1, y:1}, point{x:2, y:4}].\n ==\n\n When converting from Dicts to Compounds Keys may be computed by\n dicts_same_keys/2.", "prefix":"dicts_to_compounds" }, "dicts:dicts_to_same_keys/3": { "body": [ "dicts_to_same_keys(${1:DictsIn}, ${2:OnEmpty}, ${3:DictsOut})$4\n$0" ], "description":" dicts_to_same_keys(+DictsIn, :OnEmpty, -DictsOut)\n\n DictsOut is a copy of DictsIn, where each dict contains all keys\n appearing in all dicts of DictsIn. Values for keys that are\n added to a dict are produced by calling OnEmpty as below. The\n predicate dict_fill/4 provides an implementation that fills all\n new cells with a predefined value.\n\n ==\n call(:OnEmpty, +Key, +Dict, -Value)\n ==", "prefix":"dicts_to_same_keys" }, "dif/2": { "body":"dif(${1:A}, ${2:B})$3\n$0", "description":"dif(@A, @B).\nThe dif/2 predicate is a constraint that is true if and only if A and B are different terms. If A and B can never unify, dif/2 succeeds deterministically. If A and B are identical, it fails immediately. Finally, if A and B can unify, goals are delayed that prevent A and B to become equal. It is this last property that makes dif/2 a more general and more declarative alternative for \\=/2 and related predicates. This predicate behaves as if defined by dif(X, Y) :- when(?=(X,Y), X \\== Y). See also ?=/2. The implementation can deal with cyclic terms. \n\nThe dif/2 predicate is realised using attributed variables associated with the module dif. It is an autoloaded predicate that is defined in the library library(dif).\n\n", "prefix":"dif" }, "dif:dif/2": { "body": ["dif(${1:Term1}, ${2:Term2})$3\n$0" ], "description":" dif(+Term1, +Term2) is semidet.\n\n Constraint that expresses that Term1 and Term2 never become\n identical (==/2). Fails if `Term1 == Term2`. Succeeds if Term1\n can never become identical to Term2. In other cases the\n predicate succeeds after attaching constraints to the relevant\n parts of Term1 and Term2 that prevent the two terms to become\n identical.", "prefix":"dif" }, "directory_files/2": { "body":"directory_files(${1:Directory}, ${2:Entries})$3\n$0", "description":"directory_files(+Directory, -Entries).\nUnify Entries with a list of entries in Directory. Each member of Entries is an atom denoting an entry relative to Directory. Entries contains all entries, including hidden files and, if supplied by the OS, the special entries . and ... See also expand_file_name/2.129This predicate should be considered a misnomer because it returns entries rather than files. We stick to this name for compatibility with, e.g., SICStus, Ciao and YAP.", "prefix":"directory_files" }, "divmod/4": { "body":"divmod(${1:Dividend}, ${2:Divisor}, ${3:Quotient}, ${4:Remainder})$5\n$0", "description":"divmod(+Dividend, +Divisor, -Quotient, -Remainder).\nThis predicate is a shorthand for computing both the Quotient and Remainder of two integers in a single operation. This allows for exploiting the fact that the low level implementation for computing the quotient also produces the remainder. Timing confirms that this predicate is almost twice as fast as performing the steps independently. 
Semantically, divmod/4 is defined as below. \n\ndivmod(Dividend, Divisor, Quotient, Remainder) :-\n Quotient is Dividend div Divisor,\n Remainder is Dividend mod Divisor.\n\n Note that this predicate is only available if SWI-Prolog is compiled with unbounded integer support. This is the case for all packaged versions.\n\n", "prefix":"divmod" }, "doc_browser/0": { "body":"doc_browser$1\n$0", "description":"doc_browser.\nOpen the user's default browser on the running documentation server. Fails if no server is running.", "prefix":"doc_browser" }, "doc_browser/1": { "body":"doc_browser(${1:Spec})$2\n$0", "description":"doc_browser(+Spec).\nOpen the user's default browser on the specified page. Spec is handled similar to edit/1, resolving anything that relates somehow to the given specification and ask the user to select.bugThis flexibility is not yet implemented.", "prefix":"doc_browser" }, "doc_collect/1": { "body":"doc_collect(${1:Bool})$2\n$0", "description":"doc_collect(+Bool).\nEnable/disable collecting structured comments into the Prolog database. See doc_server/1 or doc_browser/0 for the normal way to deploy the server in your application. All these predicates must be called before loading your program.", "prefix":"doc_collect" }, "doc_files:doc_latex/3": { "body":"doc_latex(${1:Spec}, ${2:OutFile}, ${3:Options})$4\n$0", "description":"[det]doc_latex(+Spec, +OutFile, +Options).\nProcess one or more objects, writing the LaTeX output to OutFile. Spec is one of: Name / Arity: Generate documentation for predicate\n\nName // Arity: Generate documentation for DCG rule\n\nFile: If File is a prolog file (as defined by user:prolog_file_type/2), process using latex_for_file/3, otherwise process using latex_for_wiki_file/3.\n\n Typically Spec is either a list of filenames or a list of predicate indicators. Defined options are: \n\nstand_alone(+Bool): If true (default), create a document that can be run through LaTeX. If false, produce a document to be included in another LaTeX document.\n\npublic_only(+Bool): If true (default), only emit documentation for exported predicates.\n\nsection_level(+Level): Outermost section level produced. Level is the name of a LaTeX section command. Default is section.\n\nsummary(+File): Write summary declarations to the named File.\n\n ", "prefix":"doc_latex" }, "doc_files:doc_load_library/0": { "body":"doc_load_library$1\n$0", "description":"doc_load_library.\nLoad all library files. This is intended to set up a local documentation server. A typical scenario, making the server available at port 4000 of the hosting machine from all locations in a domain is given below. \n\n:- doc_server(4000,\n [ allow('.my.org')\n ]).\n:- use_module(library(pldoc/doc_library)).\n:- doc_load_library.\n\n Example code can be found in $PLBASE/doc/packages/examples/pldoc.\n\n", "prefix":"doc_load_library" }, "doc_files:doc_pack/1": { "body":"doc_pack(${1:Pack})$2\n$0", "description":"doc_pack(+Pack).\nGenerate stand-alone documentation for the package Pack. The documentation is generated in a directory doc inside the pack. The index page consists of the content of readme or readme.txt in the main directory of the pack and an index of all files and their public predicates.", "prefix":"doc_pack" }, "doc_files:doc_save/2": { "body":"doc_save(${1:FileOrDir}, ${2:Options})$3\n$0", "description":"doc_save(+FileOrDir, +Options).\nSave documentation for FileOrDir to file(s). Options include format(+Format): Currently only supports html.\n\ndoc_root(+Dir): Save output to the given directory. 
Default is to save the documentation for a file in the same directory as the file and for a directory in a subdirectory doc.\n\ntitle(+Title): Title is an atom that provides the HTML title of the main (index) page. Only meaningful when generating documentation for a directory.\n\nman_server(+RootURL): Root of a manual server used for references to built-in predicates. Default is http://www.swi-prolog.org/pldoc/\n\nindex_file(+Base): Filename for directory indices. Default is index.\n\nif(Condition): What to do with files in a directory. loaded (default) only documents files loaded into the Prolog image. true documents all files.\n\nrecursive(+Bool): If true, recurse into subdirectories.\n\ncss(+Mode): If copy, copy the CSS file to created directories. Using inline, include the CSS file into the created files. Currently, only the default copy is supported.\n\n The typical use-case is to document the Prolog files that belong to a project in the current directory. To do this load the Prolog files and run the goal below. This creates a sub-directory doc with an index file index.html. It replicates the directory structure of the source directory, creating an HTML file for each Prolog file and an index file for each sub-directory. A copy of the required CSS and image resources is copied to the doc directory. \n\n\n\n?- doc_save(., [recursive(true)]).\n\n ", "prefix":"doc_save" }, "doc_files:latex_for_file/3": { "body":"latex_for_file(${1:File}, ${2:Out}, ${3:Options})$4\n$0", "description":"[det]latex_for_file(+File, +Out, +Options).\nGenerate a LaTeX description of all commented predicates in File, writing the LaTeX text to the stream Out. Supports the options stand_alone, public_only and section_level. See doc_latex/3 for a description of the options.", "prefix":"latex_for_file" }, "doc_files:latex_for_predicates/3": { "body":"latex_for_predicates(${1:PI}, ${2:Out}, ${3:Options})$4\n$0", "description":"[det]latex_for_predicates(+PI:list, +Out, +Options).\nGenerate LaTeX for a list of predicate indicators. This does not produce the \\begin{description}...\\end{description} environment, just a plain list of \\predicate, etc. statements. The current implementation ignores Options.", "prefix":"latex_for_predicates" }, "doc_files:latex_for_wiki_file/3": { "body":"latex_for_wiki_file(${1:File}, ${2:Out}, ${3:Options})$4\n$0", "description":"[det]latex_for_wiki_file(+File, +Out, +Options).\nWrite a LaTeX translation of a Wiki file to the steam Out. Supports the options stand_alone, public_only and section_level. See doc_latex/3 for a description of the options.", "prefix":"latex_for_wiki_file" }, "doc_load_library/0": { "body":"doc_load_library$1\n$0", "description":"doc_load_library.\nLoad all library files. This is intended to set up a local documentation server. A typical scenario, making the server available at port 4000 of the hosting machine from all locations in a domain is given below. \n\n:- doc_server(4000,\n [ allow('.my.org')\n ]).\n:- use_module(library(pldoc/doc_library)).\n:- doc_load_library.\n\n Example code can be found in $PLBASE/doc/packages/examples/pldoc.\n\n", "prefix":"doc_load_library" }, "doc_server/1": { "body":"doc_server(${1:Port})$2\n$0", "description":"doc_server(?Port).\nStart documentation server at Port. Same as doc_server(Port, [allow(localhost), workers(1)]). This predicate must be called before loading the program for which you consult the documentation. 
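A typical session might look like the sketch below (illustrative only; the port 4000 and the file my_program.pl are arbitrary): \n\n ==\n ?- doc_server(4000).\n ?- [my_program]. % load the program to be documented\n ?- doc_browser.\n ==\n\n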
It calls doc_collect/1 to start collecting documentation while (re-)loading your program.", "prefix":"doc_server" }, "doc_server/2": { "body":"doc_server(${1:Port}, ${2:Options})$3\n$0", "description":"doc_server(?Port, +Options).\nStart documentation server at Port using Options. Provided options are: root(+Path): Defines the root of all locations served by the HTTP server. Default is /. Path must be an absolute URL location, starting with / and ending in /. Intended for public services behind a reverse proxy. See documentation of the HTTP package for details on using reverse proxies.\n\nedit(+Bool): If false, do not allow editing, even if the connection comes from localhost. Intended together with the root option to make pldoc available from behind a reverse proxy. See the HTTP package for configuring a Prolog server behind an Apache reverse proxy.\n\nallow(+HostOrIP): Allow connections from HostOrIP. If Host is an atom starting with a '.', suffix matching is performed. I.e. allow('.uva.nl') grants access to all machines in this domain. IP addresses are specified using the library(socket) ip/4 term. I.e. to allow access from the 10.0.0.X domain, specify allow(ip(10,0,0,_)).\n\ndeny(+HostOrIP): Deny access from the given location. Matching is equal to the allow option.\n\n Access is granted iff \n\n\n\nBoth deny and allow match\nallow exists and matches\nallow does not exist and deny does not match.\n\n", "prefix":"doc_server" }, "double_metaphone/2": { "body":"double_metaphone(${1:In}, ${2:MetaPhone})$3\n$0", "description":"double_metaphone(+In, -MetaPhone).\nSame as double_metaphone/3, but only returning the primary metaphone.", "prefix":"double_metaphone" }, "double_metaphone/3": { "body":"double_metaphone(${1:In}, ${2:MetaPhone}, ${3:AltMetaphone})$4\n$0", "description":"double_metaphone(+In, -MetaPhone, -AltMetaphone).\nCreate metaphone and alternative metaphone from In. The primary metaphone is based on English, while the secondary deals with common alternative pronunciation in other languages. In is either an atom, string object, code- or character list. The metaphones are always returned as atoms.", "prefix":"double_metaphone" }, "double_metaphone:double_metaphone/2": { "body": ["double_metaphone(${1:In}, ${2:MetaPhone})$3\n$0" ], "description":" double_metaphone(+In, -MetaPhone) is det.\n\n Same as double_metaphone/3, but only returning the primary\n metaphone.", "prefix":"double_metaphone" }, "double_metaphone:double_metaphone/3": { "body": [ "double_metaphone(${1:In}, ${2:MetaPhone}, ${3:AltMetaphone})$4\n$0" ], "description":" double_metaphone(+In, -MetaPhone, -AltMetaphone) is det.\n\n Create metaphone and alternative metaphone from In. The primary\n metaphone is based on English, while the secondary deals with\n common alternative pronunciation in other languages. In is\n either an atom, string object, code- or character list. The\n metaphones are always returned as atoms.", "prefix":"double_metaphone" }, "downcase_atom/2": { "body":"downcase_atom(${1:AnyCase}, ${2:LowerCase})$3\n$0", "description":"downcase_atom(+AnyCase, -LowerCase).\nConverts the characters of AnyCase into lowercase as char_type/2 does (i.e. 
based on the defined locale if Prolog provides locale support on the hosting platform) and unifies the lowercase atom with LowerCase.", "prefix":"downcase_atom" }, "draw_extend:draw_begin_shape/4": { "body": [ "draw_begin_shape(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"draw_begin_shape('Param1','Param2','Param3','Param4')", "prefix":"draw_begin_shape" }, "draw_extend:draw_end_shape/0": { "body": ["draw_end_shape$1\n$0" ], "description":"draw_end_shape", "prefix":"draw_end_shape" }, "dtd/2": { "body":"dtd(${1:DocType}, ${2:DTD})$3\n$0", "description":"dtd(+DocType, -DTD).\nFind the DTD representing the indicated doctype. This predicate uses a cache of DTD objects. If a doctype has no associated dtd, it searches for a file using the file search path dtd using the call: \n\n...,\nabsolute_file_name(dtd(Type),\n [ extensions([dtd]),\n access(read)\n ], DtdFile),\n...\n\n Note that DTD objects may be modified while processing errornous documents. For example, loading an SGML document starting with switches the DTD to XML mode and encountering unknown elements adds these elements to the DTD object. Re-using a DTD object to parse multiple documents should be restricted to situations where the documents processed are known to be error-free. \n\nThe DTD html is handled seperately. The Prolog flag html_dialect specifies the default html dialect, which is either html4 or html5 (default).3Note that HTML5 has no DTD. The loaded DTD is an informal DTD that includes most of the HTML5 extensions (http://www.cs.tut.fi/~jkorpela/html5-dtd.html). In addition, the parser sets the dialect flag of the DTD object. This is used by the parser to accept HTML extensions. Next, the corresponding DTD is loaded.\n\n", "prefix":"dtd" }, "dtd_property/2": { "body":"dtd_property(${1:DTD}, ${2:Property})$3\n$0", "description":"dtd_property(+DTD, ?Property).\nThis predicate is used to examine the content of a DTD. Property is one of: doctype(DocType): An atom representing the document-type defined by this DTD.\n\nelements(ListOfElements): A list of atoms representing the names of the elements in this DTD.\n\nelement(Name, Omit, Content): The DTD contains an element with the given name. Omit is a term of the format omit(OmitOpen, OmitClose), where both arguments are booleans (true or false representing whether the open- or close-tag may be omitted. Content is the content-model of the element represented as a Prolog term. This term takes the following form: emptyThe element has no content.cdataThe element contains non-parsed character data. All data up to the matching end-tag is included in the data (declared content).rcdataAs cdata, but entity-references are expanded.anyThe element may contain any number of any element from the DTD in any order.#pcdataThe element contains parsed character data .element(element)n element with this name.*(SubModel)0 or more appearances.?(SubModel)0 or one appearance.+(SubModel)1 or more appearances.,(SubModel1, SubModel2)SubModel1 followed by SubModel2.&(SubModel1, SubModel2)SubModel1 and SubModel2 in any order.|(SubModel1, SubModel2)SubModel1 or SubModel2. \n\nattributes(Element, ListOfAttributes): ListOfAttributes is a list of atoms representing the attributes of the element Element.\n\nattribute(Element, Attribute, Type, Default): Query an element. Type is one of cdata, entity, id, idref, name, nmtoken, notation, number or nutoken. For DTD types that allow for a list, the notation list(Type) is used. Finally, the DTD construct (a|b|...) 
is mapped to the term nameof(ListOfValues). Default describes the sgml default. It is one required, current, conref or implied. If a real default is present, it is one of default(Value) or fixed(Value).\n\nentities(ListOfEntities): ListOfEntities is a list of atoms representing the names of the defined entities.\n\nentity(Name, Value): Name is the name of an entity with given value. Value is one of AtomIf the value is atomic, it represents the literal value of the entity.system(Url)Url is the URL of the system external entity.public(Id, Url)For external public entities, Id is the identifier. If an URL is provided this is returned in Url. Otherwise this argument is unbound. \n\nnotations(ListOfNotations): Returns a list holding the names of all NOTATION declarations.\n\nnotation(Name, Decl): Unify Decl with a list if system(+File) and/or public(+PublicId).\n\n ", "prefix":"dtd_property" }, "duplicate_term/2": { "body":"duplicate_term(${1:In}, ${2:Out})$3\n$0", "description":"duplicate_term(+In, -Out).\nVersion of copy_term/2 that also copies ground terms and therefore ensures that destructive modification using setarg/3 does not affect the copy. See also nb_setval/2, nb_linkval/2, nb_setarg/3 and nb_linkarg/3.", "prefix":"duplicate_term" }, "dwim_match/2": { "body":"dwim_match(${1:Atom1}, ${2:Atom2})$3\n$0", "description":"dwim_match(+Atom1, +Atom2).\nTrue if Atom1 matches Atom2 in the `Do What I Mean' sense. Both Atom1 and Atom2 may also be integers or floats. The two atoms match if: They are identical\nThey differ by one character (spy == spu)\nOne character is inserted/deleted (debug == deug)\nTwo characters are transposed (trace == tarce)\n`Sub-words' are glued differently (existsfile == existsFile == exists_file)\nTwo adjacent sub-words are transposed (existsFile == fileExists)\n\n", "prefix":"dwim_match" }, "dwim_match/3": { "body":"dwim_match(${1:Atom1}, ${2:Atom2}, ${3:Difference})$4\n$0", "description":"dwim_match(+Atom1, +Atom2, -Difference).\nEquivalent to dwim_match/2, but unifies Difference with an atom identifying the difference between Atom1 and Atom2. The return values are (in the same order as above): equal, mismatched_char, inserted_char, transposed_char, separated and transposed_word.", "prefix":"dwim_match" }, "dwim_predicate/2": { "body":"dwim_predicate(${1:Term}, ${2:Dwim})$3\n$0", "description":"dwim_predicate(+Term, -Dwim).\n`Do What I Mean' (`dwim') support predicate. Term is a term, whose name and arity are used as a predicate specification. Dwim is instantiated with the most general term built from Name and the arity of a defined predicate that matches the predicate specified by Term in the `Do What I Mean' sense. See dwim_match/2 for `Do What I Mean' string matching. Internal system predicates are not generated, unless the access level is system (see access_level). Backtracking provides all alternative matches.", "prefix":"dwim_predicate" }, "e/0": { "body":"e$1\n$0", "description":"e.\nEvaluate to the mathematical constant e (2.71828 ... ).", "prefix":"e" }, "edinburgh:debug/0": { "body": ["debug$1\n$0" ], "description":" debug is det.\n nodebug is det.\n\n Switch on/off debug mode. 
Note that nodebug/0 has been defined\n such that it is not traced itself.", "prefix":"debug" }, "edinburgh:display/1": { "body": ["display(${1:Term})$2\n$0" ], "description":" display(+Term) is det.\n display(+Stream, +Term) is det.\n\n Write a term, ignoring operators.\n\n @deprecated New code must use write_term/3 using the option\n ignore_ops(true).", "prefix":"display" }, "edinburgh:display/2": { "body": ["display(${1:Stream}, ${2:Term})$3\n$0" ], "description":" display(+Term) is det.\n display(+Stream, +Term) is det.\n\n Write a term, ignoring operators.\n\n @deprecated New code must use write_term/3 using the option\n ignore_ops(true).", "prefix":"display" }, "edinburgh:fileerrors/2": { "body": ["fileerrors(${1:Old}, ${2:New})$3\n$0" ], "description":" fileerrors(-Old, +New) is det.\n\n Query and change the fileerrors flag. By default it is set to\n =true=, causing file operations to raise an exception. Setting\n it to =false= activates the old Edinburgh mode of silent\n failure.\n\n @deprecated New code should use catch/3 to handle file errors\n silently", "prefix":"fileerrors" }, "edinburgh:nodebug/0": { "body": ["nodebug$1\n$0" ], "description":" debug is det.\n nodebug is det.\n\n Switch on/off debug mode. Note that nodebug/0 has been defined\n such that it is not traced itself.", "prefix":"nodebug" }, "edinburgh:reconsult/1": { "body": ["reconsult(${1:FileOrList})$2\n$0" ], "description":" reconsult(+FileOrList) is det.\n\n Load source file(s), wiping the old content first. SWI-Prolog's\n consult/1 and related predicates always do this.\n\n @deprecated The Edinburgh Prolog consult/reconsult distinction\n is no longer used throughout most of the Prolog world.", "prefix":"reconsult" }, "edinburgh:unknown/2": { "body": ["unknown(${1:Old}, ${2:New})$3\n$0" ], "description":" unknown(-Old, +New) is det.\n\n Edinburgh Prolog predicate for dealing with undefined\n procedures", "prefix":"unknown" }, "edit/0": { "body":"edit$1\n$0", "description":"edit.\nEdit the `default' file using edit/1. The default file is the file loaded with the command line option -s or, in Windows, the file loaded by double-clicking from the Windows shell.", "prefix":"edit" }, "edit/1": { "body":"edit(${1:Specification})$2\n$0", "description":"edit(+Specification).\nFirst, exploit prolog_edit:locate/3 to translate Specification into a list of Locations. If there is more than one `hit', the user is asked to select from the locations found. Finally, prolog_edit:edit_source/1 is used to invoke the user's preferred editor. Typically, edit/1 can be handed the name of a predicate, module, basename of a file, XPCE class, XPCE method, etc.", "prefix":"edit" }, "editline:el_add_history/2": { "body":"el_add_history(${1:In}, ${2:Line})$3\n$0", "description":"[det]el_add_history(+In:stream, +Line:text).\nAdd a line to the command line history.", "prefix":"el_add_history" }, "editline:el_addfn/4": { "body":"el_addfn(${1:Input}, ${2:Command}, ${3:Help}, ${4:Goal})$5\n$0", "description":"[det]el_addfn(+Input:stream, +Command, +Help, :Goal).\nAdd a new command to the command line editor associated with Input. Command is the name of the command, Help is the help string printed with e.g. bind -a (see el_bind/2) and Goal is called if the associated key-binding is activated. 
Goal is called as \n\ncall(:Goal, +Input, +Char, -Continue)\n\n where Input is the input stream providing access to the editor, Char the activating character and Continue must be instantated with one of the known continuation codes as defined by libedit: norm, newline, eof, arghack, refresh, refresh_beep, cursor, redisplay, error or fatal. In addition, the following Continue code is provided. \n\nelectric(Move, TimeOut, Continue): Show electric caret at Move positions to the left of the normal cursor positions for the given TimeOut. Continue as defined by the Continue value.\n\n The registered Goal typically used el_line/2 to fetch the input line and el_cursor/2, el_insertstr/2 and/or el_deletestr/2 to manipulate the input line. \n\nNormally el_bind/2 is used to associate the defined command with a keyboard sequence. \n\nSee also: el_set(3) EL_ADDFN for details.\n\n ", "prefix":"el_addfn" }, "editline:el_bind/2": { "body":"el_bind(${1:In}, ${2:Args})$3\n$0", "description":"[det]el_bind(+In:stream, +Args).\nInvoke the libedit bind command with the given arguments. The example below lists the current key bindings. \n\n?- el_bind(user_input, ['-a']).\n\n The predicate el_bind/2 is typically used to bind commands defined using el_addfn/4. Note that the C proxy function has only the last character of the command as context to find the Prolog binding. This implies we cannot both bind e.g., \"^[?\" *and \"?\" to a Prolog function. \n\nSee also: editrc(5) for more information.\n\n ", "prefix":"el_bind" }, "editline:el_cursor/2": { "body":"el_cursor(${1:Input}, ${2:Move})$3\n$0", "description":"[det]el_cursor(+Input:stream, +Move:integer).\nMove the cursor Move character forwards (positive) or backwards (negative).", "prefix":"el_cursor" }, "editline:el_deletestr/2": { "body":"el_deletestr(${1:Input}, ${2:Count})$3\n$0", "description":"[det]el_deletestr(+Input:stream, +Count).\nDelete Count characters before the cursor.", "prefix":"el_deletestr" }, "editline:el_history/2": { "body":"el_history(${1:In}, ${2:Action})$3\n$0", "description":"[det]el_history(+In:stream, ?Action).\nPerform a generic action on the history. This provides an incomplete interface to history() from libedit. Supported actions are: clear: Clear the history.\n\nsetsize(+Integer): Set size of history to size elements.\n\nsetunique(+Boolean): Set flag that adjacent identical event strings should not be entered into the history.\n\n ", "prefix":"el_history" }, "editline:el_history_events/2": { "body":"el_history_events(${1:In}, ${2:Events})$3\n$0", "description":"[det]el_history_events(+In:stream, -Events:list(pair)).\nUnify Events with a list of pairs of the form Num-String, where Num is the event number and String is the associated string without terminating newline.", "prefix":"el_history_events" }, "editline:el_insertstr/2": { "body":"el_insertstr(${1:Input}, ${2:Text})$3\n$0", "description":"[det]el_insertstr(+Input:stream, +Text).\nInsert Text at the cursor.", "prefix":"el_insertstr" }, "editline:el_line/2": { "body":"el_line(${1:Input}, ${2:Line})$3\n$0", "description":"[det]el_line(+Input:stream, -Line).\nFetch the currently buffered input line. Line is a term line(Before, After), where Before is a string holding the text before the cursor and After is a string holding the text after the cursor.", "prefix":"el_line" }, "editline:el_read_history/2": { "body":"el_read_history(${1:In}, ${2:File})$3\n$0", "description":"[det]el_read_history(+In:stream, +File:file).\nRead the history saved using el_write_history/2. 
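A hypothetical call (the stream and file name are merely illustrative): \n\n ==\n ?- el_read_history(user_input, 'history.txt').\n ==\n\n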
File is a file specification for absolute_file_name/3. ", "prefix":"el_read_history" }, "editline:el_setup/1": { "body":"el_setup(${1:In})$2\n$0", "description":"[nondet,multifile]el_setup(+In:stream).\nThis hooks is called as forall(el_setup(Input), true) after the input stream has been wrapped, the default Prolog commands have been added and the default user setup file has been sourced using el_source/2. It can be used to define and bind additional commands.", "prefix":"el_setup" }, "editline:el_source/2": { "body":"el_source(${1:In}, ${2:File})$3\n$0", "description":"[det]el_source(+In:stream, +File).\nInitialise editline by reading the contents of File. If File is unbound try $HOME/.editrc", "prefix":"el_source" }, "editline:el_unwrap/1": { "body":"el_unwrap(${1:In})$2\n$0", "description":"[det]el_unwrap(+In:stream).\nRemove the libedit wrapper for In and the related output and error streams. bug: The wrapper creates FILE* handles that cannot be closed and thus wrapping and unwrapping implies a (modest) memory leak.\n\n ", "prefix":"el_unwrap" }, "editline:el_wrap/4": { "body":"el_wrap(${1:ProgName}, ${2:In}, ${3:Out}, ${4:Error})$5\n$0", "description":"[det]el_wrap(+ProgName:atom, +In:stream, +Out:stream, +Error:stream).\nEnable editline on the stream-triple . From this moment on In is a handle to the command line editor. ProgName is the name of the invoking program, used when reading the editrc(5) file to determine which settings to use. ", "prefix":"el_wrap" }, "editline:el_wrapped/1": { "body":"el_wrapped(${1:In})$2\n$0", "description":"[semidet]el_wrapped(+In:stream).\nTrue if In is a stream wrapped by el_wrap/3.", "prefix":"el_wrapped" }, "editline:el_write_history/2": { "body":"el_write_history(${1:In}, ${2:File})$3\n$0", "description":"[det]el_write_history(+In:stream, +File:file).\nSave editline history to File. The history may be reloaded using el_read_history/2. File is a file specification for absolute_file_name/3. ", "prefix":"el_write_history" }, "emacs_extend:declare_emacs_mode/2": { "body": ["declare_emacs_mode(${1:ModeName}, ${2:FileSpec})$3\n$0" ], "description":" declare_emacs_mode(+ModeName, +FileSpec).\n\n Specifies that PceEmacs mode `ModeName' may be defined by\n (auto)loading `FileSpec'.", "prefix":"declare_emacs_mode" }, "emacs_extend:declare_emacs_mode/3": { "body": [ "declare_emacs_mode(${1:ModeName}, ${2:FileSpec}, ${3:ListOfPatterns})$4\n$0" ], "description":" declare_emacs_mode(+ModeName, +FileSpec, +ListOfPatterns)\n\n Sames as declare_emacs_mode/2. `ListOfPatterns' is a list of\n regular expressions that will automatically start this mode.", "prefix":"declare_emacs_mode" }, "emacs_tags:emacs_complete_tag/3": { "body": ["emacs_complete_tag(${1:Prefix}, ${2:Directory}, ${3:Goal})$4\n$0" ], "description":" emacs_complete_tag(+Prefix, ?Directory, :Goal) is semidet.\n\n Call call(Goal, Symbol) for each symbol that has the given\n Prefix.\n\n @see Used by class emacs_tag_item (defined in this file)", "prefix":"emacs_complete_tag" }, "emacs_tags:emacs_init_tags/1": { "body": ["emacs_init_tags(${1:FileOrDir})$2\n$0" ], "description":" emacs_init_tags(+FileOrDir) is semidet.\n\n Load tags from the given GNU-Emacs TAGS file. 
If FileOrDir is a\n directory, see whether /TAGS exists.", "prefix":"emacs_init_tags" }, "emacs_tags:emacs_tag/4": { "body": ["emacs_tag(${1:Symbol}, ${2:Dir}, ${3:File}, ${4:Line})$5\n$0" ], "description":" emacs_tag(+Symbol, ?Dir, -File, -Line)\n\n Symbol is defined in File at Line.\n\n @param Dir is the directory in which the tag-file resides,\n represented as a canonical file-name.", "prefix":"emacs_tag" }, "emacs_tags:emacs_tag_file/1": { "body": ["emacs_tag_file(${1:File})$2\n$0" ], "description":" emacs_tag_file(?File) is nondet.\n\n True if File is a loaded Emacs tag-file.", "prefix":"emacs_tag_file" }, "emacs_tags:emacs_update_tags/0": { "body": ["emacs_update_tags$1\n$0" ], "description":" emacs_update_tags is det.\n\n Reload all modified tag-files.", "prefix":"emacs_update_tags" }, "encoding/1": { "body":"encoding(${1:Encoding})$2\n$0", "description":"encoding(+Encoding).\nThis directive can appear anywhere in a source file to define how characters are encoded in the remainder of the file. It can be used in files that are encoded with a superset of US-ASCII, currently UTF-8 and ISO Latin-1. See also section 2.18.1.", "prefix":"encoding" }, "engine_create/3": { "body":"engine_create(${1:Template}, ${2:Goal}, ${3:Engine})$4\n$0", "description":"[det]engine_create(+Template, :Goal, ?Engine).\n", "prefix":"engine_create" }, "engine_create/4": { "body":"engine_create(${1:Template}, ${2:Goal}, ${3:Engine}, ${4:Options})$5\n$0", "description":"[det]engine_create(+Template, :Goal, -Engine, +Options).\nCreate a new engine and unify Engine with a handle to it. Template and Goal form a pair similar to findall/3: the instantiation of Template becomes available through engine_next/2 after Goal succeeds. Options is a list of the following options. See thread_create/3 for details. alias(+Name): Give the engine a name. Name must be an atom. If this option is provided, Engine is unified with Name. The name space for engines is shared with threads and mutexes.\n\nglobal(KBytes): Set the limit for the global stack in KBytes.\n\nlocal(KBytes): Set the limit for the local stack in KBytes.\n\ntrail(KBytes): Set the limit for the trail stack in KBytes.\n\n The Engine argument of engine_create/3 may be instantiated to an atom, creating an engine with the given alias.", "prefix":"engine_create" }, "engine_fetch/1": { "body":"engine_fetch(${1:Term})$2\n$0", "description":"[det]engine_fetch(-Term).\nCalled from within the engine to fetch the term made available through engine_post/2 or engine_post/3. If no term is available an existence_error exception is raised.", "prefix":"engine_fetch" }, "engine_next/2": { "body":"engine_next(${1:Engine}, ${2:Term})$3\n$0", "description":"[semidet]engine_next(+Engine, -Term).\nAsk the engine Engine to produce a next answer. On the first call on a specific engine, the Goal of the engine is started. If a previous call returned an answer through completion, this causes the engine to backtrack and finally, if the engine produced its previous result using engine_yield/1, execution proceeds after the engine_yield/1 call.", "prefix":"engine_next" }, "engine_next_reified/2": { "body":"engine_next_reified(${1:Engine}, ${2:Term})$3\n$0", "description":"[det]engine_next_reified(+Engine, -Term).\nSimilar to engine_next/2, but instead of success, failure or raising an exception, Term is unified with one of the terms below. This predicate is provided primarily for compatibility with Lean Prolog. 
the(Answer): Goal succeeded with Template bound to Answer or Goal yielded with a term Answer.\n\nno: Goal failed.\n\nexception(Exception): Goal raises the error Exception.\n\n ", "prefix":"engine_next_reified" }, "engine_post/2": { "body":"engine_post(${1:Engine}, ${2:Term})$3\n$0", "description":"[det]engine_post(+Engine, +Term).\nMake Term available to engine_fetch/1 inside the Engine. This call must be followed by a call to engine_next/2 and the engine must call engine_fetch/1.", "prefix":"engine_post" }, "engine_post/3": { "body":"engine_post(${1:Engine}, ${2:Term}, ${3:Reply})$4\n$0", "description":"[det]engine_post(+Engine, +Term, -Reply).\nCombines engine_post/2 and engine_next/2.", "prefix":"engine_post" }, "engine_self/1": { "body":"engine_self(${1:Engine})$2\n$0", "description":"[det]engine_self(-Engine).\nCalled from within the engine to get access to the handle to the engine itself.", "prefix":"engine_self" }, "engine_yield/1": { "body":"engine_yield(${1:Term})$2\n$0", "description":"[det]engine_yield(+Term).\nCalled from within the engine, causing engine_next/2 in the caller to return with Term. A subsequent call to engine_next/2 causes engine_yield/1 to `return'. This predicate can only be called if the engine is not involved in a callback from C, i.e., when the engine calls a predicate defined in C that calls back Prolog it is not possible to use this predicate. Trying to do so results in a permission_error exception.", "prefix":"engine_yield" }, "ensure_loaded/1": { "body":"ensure_loaded(${1:File})$2\n$0", "description":"ensure_loaded(:File).\nIf the file is not already loaded, this is equivalent to consult/1. Otherwise, if the file defines a module, import all public predicates. Finally, if the file is already loaded, is not a module file, and the context module is not the global user module, ensure_loaded/1 will call consult/1. With this semantics, we hope to get as close as possible to the clear semantics without the presence of a module system. Applications using modules should consider using use_module/[1,2]. \n\nEquivalent to load_files(Files, [if(not_loaded)]).44On older versions the condition used to be if(changed). Poor time management on some machines or copying often caused problems. The make/0 predicate deals with updating the running system after changing the source code.\n\n", "prefix":"ensure_loaded" }, "epsilon/0": { "body":"epsilon$1\n$0", "description":"epsilon.\nEvaluate to the difference between the float 1.0 and the first larger floating point number.", "prefix":"epsilon" }, "erase/1": { "body":"erase(${1:Reference})$2\n$0", "description":"erase(+Reference).\nErase a record or clause from the database. Reference is a db-reference returned by recorda/3, recordz/3 or recorded/3, clause/3, assert/2, asserta/2 or assertz/2. Fail silently if the referenced object no longer exists. 
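For instance (an illustrative query, not taken from the manual), a record created with recorda/3 can be removed through its reference: \n\n ==\n ?- recorda(key, hello, Ref), erase(Ref).\n Ref = <record>(...).\n ?- recorded(key, hello).\n false.\n ==\n\n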
Notably, if multiple threads attempt to erase the same clause one will succeed and the others will fail.", "prefix":"erase" }, "erf/1": { "body":"erf(${1:Expr})$2\n$0", "description":"erf(+Expr).\nhttp://en.wikipedia.org/wiki/Error_functionWikipediA: ``In mathematics, the error function (also called the Gauss error function) is a special function (non-elementary) of sigmoid shape which occurs in probability, statistics and partial differential equations.''", "prefix":"erf" }, "erfc/1": { "body":"erfc(${1:Expr})$2\n$0", "description":"erfc(+Expr).\nhttp://en.wikipedia.org/wiki/Error_functionWikipediA: ``The complementary error function.''", "prefix":"erfc" }, "error:current_type/3": { "body":"current_type(${1:Type}, ${2:Var}, ${3:Body})$4\n$0", "description":"[nondet]current_type(?Type, @Var, -Body).\nTrue when Type is a currently defined type and Var satisfies Type of the body term Body succeeds.", "prefix":"current_type" }, "error:domain_error/2": { "body":"domain_error(${1:Type}, ${2:Term})$3\n$0", "description":"domain_error(+Type, +Term).\nThe argument is of the proper type, but has a value that is outside the supported values. See type_error/2 for a more elaborate discussion of the distinction between type- and domain-errors.", "prefix":"domain_error" }, "error:existence_error/2": { "body":"existence_error(${1:Type}, ${2:Term})$3\n$0", "description":"existence_error(+Type, +Term).\nTerm is of the correct type and correct domain, but there is no existing (external) resource that is represented by it.", "prefix":"existence_error" }, "error:has_type/2": { "body":"has_type(${1:Type}, ${2:Term})$3\n$0", "description":"[semidet,multifile]has_type(+Type, @Term).\nTrue if Term satisfies Type.", "prefix":"has_type" }, "error:instantiation_error/1": { "body":"instantiation_error(${1:Term})$2\n$0", "description":"instantiation_error(+Term).\nAn argument is under-instantiated. I.e. it is not acceptable as it is, but if some variables are bound to appropriate values it would be acceptable. Term is the term that needs (further) instantiation. Unfortunately, the ISO error does not allow for passing this term along with the error, but we pass it to this predicate for documentation purposes and to allow for future enhancement. ", "prefix":"instantiation_error" }, "error:is_of_type/2": { "body":"is_of_type(${1:Type}, ${2:Term})$3\n$0", "description":"[semidet]is_of_type(+Type, @Term).\nTrue if Term satisfies Type.", "prefix":"is_of_type" }, "error:must_be/2": { "body":"must_be(${1:Type}, ${2:Term})$3\n$0", "description":"[det]must_be(+Type, @Term).\nTrue if Term satisfies the type constraints for Type. Defined types are atom, atomic, between, boolean, callable, chars, codes, text, compound, constant, float, integer, nonneg, positive_integer, negative_integer, nonvar, number, oneof, list, list_or_partial_list, symbol, var, rational, encoding, dict and string. Most of these types are defined by an arity-1 built-in predicate of the same name. 
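For example (illustrative queries, not from the manual): \n\n ==\n ?- must_be(integer, 3).\n true.\n ?- must_be(integer, foo). % raises type_error(integer, foo)\n ?- must_be(integer, _). % raises an instantiation error\n ==\n\n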
Below is a brief definition of the other types.\n\nboolean: one of true or false\nchar: Atom of length 1\ncode: Representation of a Unicode code point\nchars: Proper list of 1-character atoms\ncodes: Proper list of Unicode character codes\ntext: One of atom, string, chars or codes\nbetween(IntL,IntU): Integer [IntL..IntU]\nbetween(FloatL,FloatU): Number [FloatL..FloatU]\nnonneg: Integer >= 0\npositive_integer: Integer > 0\nnegative_integer: Integer < 0\noneof(L): Ground term that is member of L\nencoding: Valid name for a character encoding\ncyclic: Cyclic term (rational tree)\nacyclic: Acyclic term (tree)\nlist(Type): Proper list with elements of Type\nlist_or_partial_list: A list or an open list (ending in a variable)\n\n Note: The Windows version can only represent Unicode code points up to 2^16-1. Higher values cause a representation error on most text handling predicates. \n\nthrows: instantiation_error if Term is insufficiently instantiated and type_error(Type, Term) if Term is not of Type.\n\n ", "prefix":"must_be" }, "error:permission_error/3": { "body":"permission_error(${1:Action}, ${2:Type}, ${3:Term})$4\n$0", "description":"permission_error(+Action, +Type, +Term).\nIt is not allowed to perform Action on the object Term that is of the given Type.", "prefix":"permission_error" }, "error:representation_error/1": { "body":"representation_error(${1:Reason})$2\n$0", "description":"representation_error(+Reason).\nA representation error indicates a limitation of the implementation. SWI-Prolog has no such limits that are not covered by other errors, but an example of a representation error in another Prolog implementation could be an attempt to create a term with an arity higher than supported by the system.", "prefix":"representation_error" }, "error:resource_error/1": { "body":"resource_error(${1:Culprit})$2\n$0", "description":"resource_error(+Culprit).\nA goal cannot be completed due to lack of resources.", "prefix":"resource_error" }, "error:syntax_error/1": { "body":"syntax_error(${1:Culprit})$2\n$0", "description":"syntax_error(+Culprit).\nA text has invalid syntax. The error is described by Culprit. To be done: Deal with proper description of the location of the error. For short texts, we allow for Type(Text), meaning Text is not a valid Type. E.g. syntax_error(number('1a')) means that 1a is not a valid number.\n\n ", "prefix":"syntax_error" }, "error:type_error/2": { "body":"type_error(${1:Type}, ${2:Term})$3\n$0", "description":"type_error(+Type, +Term).\nTell the user that Term is not of the expected Type. This error is closely related to domain_error/2 because the notion of types is not really set in stone in Prolog. We introduce the difference using a simple example. Suppose an argument must be a non-negative integer. If the actual argument is not an integer, this is a type_error. If it is a negative integer, it is a domain_error. \n\nTypical borderline cases are predicates accepting a compound term, e.g., point(X,Y). One could argue that the basic type is a compound-term and any other compound term is a domain error. Most Prolog programmers consider each compound as a type and would consider a compound that is not point(_,_) a type_error.\n\n", "prefix":"type_error" }, "error:uninstantiation_error/1": { "body":"uninstantiation_error(${1:Term})$2\n$0", "description":"uninstantiation_error(+Term).\nAn argument is over-instantiated. This error is used for output arguments whose value cannot be known upfront. 
For example, the goal open(File, read, input) cannot succeed because the system will allocate a new unique stream handle that will never unify with input.", "prefix":"uninstantiation_error" }, "eval/1": { "body":"eval(${1:Expr})$2\n$0", "description":"eval(+Expr).\nEvaluate Expr. Although ISO standard dictates that `A=1+2, B is A' works and unifies B to 3, it is widely felt that source level variables in arithmetic expressions should have been limited to numbers. In this view the eval function can be used to evaluate arbitrary expressions.109The eval/1 function was first introduced by ECLiPSe and is under consideration for YAP.", "prefix":"eval" }, "exception/3": { "body":"exception(${1:Exception}, ${2:Context}, ${3:Action})$4\n$0", "description":"exception(+Exception, +Context, -Action).\nDynamic predicate, normally not defined. Called by the Prolog system on run-time exceptions that can be repaired `just-in-time'. The values for Exception are described below. See also catch/3 and throw/1. If this hook predicate succeeds it must instantiate the Action argument to the atom fail to make the operation fail silently, retry to tell Prolog to retry the operation or error to make the system generate an exception. The action retry only makes sense if this hook modified the environment such that the operation can now succeed without error. \n\nundefined_predicate: Context is instantiated to a predicate indicator ([module]:/). If the predicate fails, Prolog will generate an existence_error exception. The hook is intended to implement alternatives to the built-in autoloader, such as autoloading code from a database. Do not use this hook to suppress existence errors on predicates. See also unknown and section 2.13.\n\nundefined_global_variable: Context is instantiated to the name of the missing global variable. The hook must call nb_setval/2 or b_setval/2 before returning with the action retry.\n\n ", "prefix":"exception" }, "exists_directory/1": { "body":"exists_directory(${1:Directory})$2\n$0", "description":"exists_directory(+Directory).\nTrue if Directory exists and is a directory. This does not imply the user has read, search or write permission for the directory.", "prefix":"exists_directory" }, "exists_file/1": { "body":"exists_file(${1:File})$2\n$0", "description":"exists_file(+File).\nTrue if File exists and is a regular file. This does not imply the user has read and/or write permission for the file. This is the same as access_file(File, exist).", "prefix":"exists_file" }, "exists_source/1": { "body":"exists_source(${1:Spec})$2\n$0", "description":"exists_source(+Spec).\nIs true if Spec exists as a Prolog source. Spec uses the same conventions as load_files/2. Fails without error if Spec cannot be found.", "prefix":"exists_source" }, "exp/1": { "body":"exp(${1:Expr})$2\n$0", "description":"[ISO]exp(+Expr).\nResult = e **Expr", "prefix":"exp" }, "expand_answer/2": { "body":"expand_answer(${1:Bindings}, ${2:ExpandedBindings})$3\n$0", "description":"expand_answer(+Bindings, -ExpandedBindings).\nHook in module user, normally not defined. Expand the result of a successfully executed top-level query. Bindings is the query = binding list from the query. ExpandedBindings must be unified with the bindings the top level should print.", "prefix":"expand_answer" }, "expand_file_name/2": { "body":"expand_file_name(${1:WildCard}, ${2:List})$3\n$0", "description":"expand_file_name(+WildCard, -List).\nUnify List with a sorted list of files or directories matching WildCard. 
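For instance (a hypothetical query; the result depends on the contents of the working directory): \n\n ==\n ?- expand_file_name('*.pl', Files).\n Files = ['load.pl', 'test.pl'].\n ==\n\n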
The normal Unix wildcard constructs `?', `*', `[ ... ]' and `{...}' are recognised. The interpretation of `{...}' is slightly different from the C shell (csh(1)). The comma-separated argument can be arbitrary patterns, including `{...}' patterns. The empty pattern is legal as well: `{.pl,}' matches either `.pl' or the empty string. If the pattern contains wildcard characters, only existing files and directories are returned. Expanding a `pattern' without wildcard characters returns the argument, regardless of whether or not it exists. \n\nBefore expanding wildcards, the construct $var is expanded to the value of the environment variable var, and a possible leading ~ character is expanded to the user's home directory.130On Windows, the home directory is determined as follows: if the environment variable HOME exists, this is used. If the variables HOMEDRIVE and HOMEPATH exist (Windows-NT), these are used. At initialisation, the system will set the environment variable HOME to point to the SWI-Prolog home directory if neither HOME nor HOMEPATH and HOMEDRIVE are defined.\n\n", "prefix":"expand_file_name" }, "expand_file_search_path/2": { "body":"expand_file_search_path(${1:Spec}, ${2:Path})$3\n$0", "description":"[nondet]expand_file_search_path(+Spec, -Path).\nUnifies Path with all possible expansions of the filename specification Spec. See also absolute_file_name/3.", "prefix":"expand_file_search_path" }, "expand_goal/2": { "body":"expand_goal(${1:Goal1}, ${2:Goal2})$3\n$0", "description":"expand_goal(+Goal1, -Goal2).\nThis predicate is normally called by the compiler to perform preprocessing using goal_expansion/2. The predicate computes a fixed-point by applying transformations until there are no more changes. If optimisation is enabled (see -O and optimise), expand_goal/2 simplifies the result by removing unneeded calls to true/0 and fail/0 as well as unreachable branches.", "prefix":"expand_goal" }, "expand_goal/4": { "body":"expand_goal(${1:Goal1}, ${2:Layout1}, ${3:Goal2}, ${4:Layout2})$5\n$0", "description":"expand_goal(+Goal1, ?Layout1, -Goal2, -Layout2).\n", "prefix":"expand_goal" }, "expand_query/4": { "body":"expand_query(${1:Query}, ${2:Expanded}, ${3:Bindings}, ${4:ExpandedBindings})$5\n$0", "description":"expand_query(+Query, -Expanded, +Bindings, -ExpandedBindings).\nHook in module user, normally not defined. Query and Bindings represents the query read from the user and the names of the free variables as obtained using read_term/3. If this predicate succeeds, it should bind Expanded and ExpandedBindings to the query and bindings to be executed by the top level. This predicate is used by the top level (prolog/0). See also expand_answer/2 and term_expansion/2.", "prefix":"expand_query" }, "expand_term/2": { "body":"expand_term(${1:Term1}, ${2:Term2})$3\n$0", "description":"expand_term(+Term1, -Term2).\nThis predicate is normally called by the compiler on terms read from the input to perform preprocessing. It consists of four steps, where each step processes the output of the previous step. \n\nTest conditional compilation directives and translate all input to [] if we are in a `false branch' of the conditional compilation. See section 4.3.1.2. \nCall term_expansion/2. This predicate is first tried in the module that is being compiled and then in the module user. \nCall DCG expansion (dcg_translate_rule/2). 
\nCall expand_goal/2 on each body term that appears in the output of the previous steps.\n\n", "prefix":"expand_term" }, "expand_term/4": { "body":"expand_term(${1:Term1}, ${2:Layout1}, ${3:Term2}, ${4:Layout2})$5\n$0", "description":"expand_term(+Term1, ?Layout1, -Term2, -Layout2).\n", "prefix":"expand_term" }, "explain/1": { "body":"explain(${1:ToExplain})$2\n$0", "description":"explain(+ToExplain).\nGive an explanation on the given `object'. The argument may be any Prolog data object. If the argument is an atom, a term of the form Name/Arity or a term of the form Module:Name/Arity, explain/1 describes the predicate as well as possible references to it. See also gxref/0.", "prefix":"explain" }, "explain/2": { "body":"explain(${1:ToExplain}, ${2:Explanation})$3\n$0", "description":"explain(+ToExplain, -Explanation).\nUnify Explanation with an explanation for ToExplain. Backtracking yields further explanations.", "prefix":"explain" }, "export/2": { "body":"export(${1:PredicateIndicator}, ${2:...})$3\n$0", "description":"export(+PredicateIndicator, ...).\nAdd predicates to the public list of the context module. This implies the predicate will be imported into another module if this module is imported with use_module/[1,2]. Note that predicates are normally exported using the directive module/2. export/1 is meant to handle export from dynamically created modules.", "prefix":"export" }, "fast_read/2": { "body":"fast_read(${1:Input}, ${2:Term})$3\n$0", "description":"fast_read(+Input, -Term).\nRead Term using the fast serialization format from the Input stream. Input must be a binary stream.bugThe predicate fast_read/2 may crash on arbitrary input.", "prefix":"fast_read" }, "fast_term_serialized/2": { "body":"fast_term_serialized(${1:Term}, ${2:String})$3\n$0", "description":"fast_term_serialized(?Term, ?String).\n(De-)serialize Term to/from String.", "prefix":"fast_term_serialized" }, "fast_write/2": { "body":"fast_write(${1:Output}, ${2:Term})$3\n$0", "description":"fast_write(+Output, +Term).\nWrite Term using the fast serialization format to the Output stream. Output must be a binary stream.", "prefix":"fast_write" }, "fastrw:fast_read/1": { "body": ["fast_read(${1:Term})$2\n$0" ], "description":" fast_read(-Term)\n\n The next term is read from current standard input and is unified\n with Term. The syntax of the term must agree with fast_read /\n fast_write format. If the end of the input has been reached,\n Term is unified with the term =end_of_file=.", "prefix":"fast_read" }, "fastrw:fast_read/2": { "body": ["fast_read(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"fast_read('Param1','Param2')", "prefix":"fast_read" }, "fastrw:fast_write/1": { "body": ["fast_write(${1:Term})$2\n$0" ], "description":" fast_write(+Term)\n\n Output Term in a way that fast_read/1 and fast_read/2 will be\n able to read it back.", "prefix":"fast_write" }, "fastrw:fast_write/2": { "body": ["fast_write(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"fast_write('Param1','Param2')", "prefix":"fast_write" }, "fastrw:fast_write_to_string/3": { "body": ["fast_write_to_string(${1:Term}, ${2:String}, ${3:Tail})$4\n$0" ], "description":" fast_write_to_string(+Term, -String, ?Tail)\n\n Perform a fast-write to the difference-slist String\\Tail.", "prefix":"fast_write_to_string" }, "file_base_name/2": { "body":"file_base_name(${1:File}, ${2:BaseName})$3\n$0", "description":"file_base_name(+File, -BaseName).\nExtracts the filename part from a path specification. 
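For example (an illustrative query added here, not from the manual): \n\n ==\n ?- file_base_name('/usr/lib/swipl/library/lists.pl', Base).\n Base = 'lists.pl'.\n ==\n\n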
If File does not contain any directory separators, File is returned in BaseName. See also file_directory_name/2. If the File arguments ends with a /, e.g., '/hello/', BaseName is unified with the empty atom ('').", "prefix":"file_base_name" }, "file_directory_name/2": { "body":"file_directory_name(${1:File}, ${2:Directory})$3\n$0", "description":"file_directory_name(+File, -Directory).\nExtracts the directory part of File. The returned Directory name does not end in /. There are two special cases. The directory name of / is / itself, and the directory name is . if File does not contain any / characters. If the File argument ends with a /, e.g., '/hello/', it is not a valid file name. In this case the final / is removed from File, e.g., '/hello'. See also directory_file_path/3 from library(filesex). The system ensures that for every valid Path using the Prolog (POSIX) directory separators, following is true on systems with a sound implementation of same_file/2.127On some systems, Path and Path2 refer to the same entry in the file system, but same_file/2 may fail. See also prolog_to_os_filename/2. \n\n\n\n ...,\n file_directory_name(Path, Dir),\n file_base_name(Path, File),\n directory_file_path(Dir, File, Path2),\n same_file(Path, Path2).\n\n ", "prefix":"file_directory_name" }, "file_name_extension/3": { "body":"file_name_extension(${1:Base}, ${2:Extension}, ${3:Name})$4\n$0", "description":"file_name_extension(?Base, ?Extension, ?Name).\nThis predicate is used to add, remove or test filename extensions. The main reason for its introduction is to deal with different filename properties in a portable manner. If the file system is case-insensitive, testing for an extension will also be done case-insensitive. Extension may be specified with or without a leading dot (.). If an Extension is generated, it will not have a leading dot.", "prefix":"file_name_extension" }, "file_search_path/2": { "body":"file_search_path(${1:Alias}, ${2:Path})$3\n$0", "description":"file_search_path(+Alias, -Path).\nDynamic multifile hook predicate used to specify `path aliases'. This hook is called by absolute_file_name/3 to search files specified as Alias(Name), e.g., library(lists). This feature is best described using an example. Given the definition: \n\nfile_search_path(demo, '/usr/lib/prolog/demo').\n\n the file specification demo(myfile) will be expanded to /usr/lib/prolog/demo/myfile. The second argument of file_search_path/2 may be another alias. \n\nBelow is the initial definition of the file search path. This path implies swi() and refers to a file in the SWI-Prolog home directory. The alias foreign() is intended for storing shared libraries (.so or .DLL files). See also use_foreign_library/1. \n\n\n\nuser:file_search_path(library, X) :-\n library_directory(X).\nuser:file_search_path(swi, Home) :-\n current_prolog_flag(home, Home).\nuser:file_search_path(foreign, swi(ArchLib)) :-\n current_prolog_flag(arch, Arch),\n atom_concat('lib/', Arch, ArchLib).\nuser:file_search_path(foreign, swi(lib)).\nuser:file_search_path(path, Dir) :-\n getenv('PATH', Path),\n ( current_prolog_flag(windows, true)\n -> atomic_list_concat(Dirs, (;), Path)\n ; atomic_list_concat(Dirs, :, Path)\n ),\n member(Dir, Dirs).\n\n The file_search_path/2 expansion is used by all loading predicates as well as by absolute_file_name/[2,3]. 
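For instance, a library(...) file specification can be resolved explicitly (an illustrative query; the resolved path depends on the installation): \n\n ==\n ?- absolute_file_name(library(lists), Path,\n [file_type(prolog), access(read)]).\n Path = '/usr/lib/swipl/library/lists.pl'.\n ==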
\n\nThe Prolog flag verbose_file_search can be set to true to help debugging Prolog's search for files.\n\n", "prefix":"file_search_path" }, "files:can_open_file/2": { "body": ["can_open_file(${1:Path}, ${2:Mode})$3\n$0" ], "description":" can_open_file(+Path, +Mode)\n\n Succeeds if the user has access to `File' in mode `Mode'. Fails\n silently if this is not the case. `Mode' is one of {read,\n write, both}. This used to be difficult. Since we have\n access_file/2 it is merely a Quintus compatibility predicate\n and should be in quintus.pl. We will leave it here for compatibility\n reasons.\n\n @deprecated Use access_file/2.", "prefix":"can_open_file" }, "files:chdir/1": { "body": ["chdir(${1:Dir})$2\n$0" ], "description":" chdir(+Dir) is det.\n\n Change Working Directory.\n\n @deprecated Use using working_directory/2.", "prefix":"chdir" }, "files_ex:copy_directory/2": { "body": ["copy_directory(${1:From}, ${2:To})$3\n$0" ], "description":" copy_directory(+From, +To) is det.\n\n Copy the contents of the directory From to To (recursively). If\n To is the name of an existing directory, the _contents_ of From\n are copied into To. I.e., no subdirectory using the basename of\n From is created.", "prefix":"copy_directory" }, "files_ex:copy_file/2": { "body": ["copy_file(${1:From}, ${2:To})$3\n$0" ], "description":" copy_file(From, To) is det.\n\n Copy a file into a new file or directory. The data is copied as\n binary data.", "prefix":"copy_file" }, "files_ex:delete_directory_and_contents/1": { "body": ["delete_directory_and_contents(${1:Dir})$2\n$0" ], "description":" delete_directory_and_contents(+Dir) is det.\n\n Recursively remove the directory Dir and its contents. If Dir is\n a symbolic link or symbolic links inside Dir are encountered,\n the links are removed rather than their content. Use with care!", "prefix":"delete_directory_and_contents" }, "files_ex:delete_directory_contents/1": { "body": ["delete_directory_contents(${1:Dir})$2\n$0" ], "description":" delete_directory_contents(+Dir) is det.\n\n Remove all content from directory Dir, without removing Dir\n itself. Similar to delete_directory_and_contents/2, if symbolic\n links are encountered in Dir, the links are removed rather than\n their content.", "prefix":"delete_directory_contents" }, "files_ex:directory_file_path/3": { "body": ["directory_file_path(${1:Directory}, ${2:File}, ${3:Path})$4\n$0" ], "description":" directory_file_path(+Directory, +File, -Path) is det.\n directory_file_path(?Directory, ?File, +Path) is det.\n\n True when Path is the full path-name for File in Dir. This is\n comparable to atom_concat(Directory, File, Path), but it ensures\n there is exactly one / between the two parts. Notes:\n\n * In mode (+,+,-), if File is given and absolute, Path\n is unified to File.\n * Mode (-,-,+) uses file_directory_name/2 and file_base_name/2", "prefix":"directory_file_path" }, "files_ex:link_file/3": { "body": ["link_file(${1:OldPath}, ${2:NewPath}, ${3:Type})$4\n$0" ], "description":" link_file(+OldPath, +NewPath, +Type) is det.\n\n Create a link in the filesystem from NewPath to OldPath. Type\n defines the type of link and is one of =hard= or =symbolic=.\n\n With some limitations, these functions also work on Windows.\n First of all, the unerlying filesystem must support links. This\n requires NTFS. 
Second, symbolic links are only supported in\n Vista and later.\n\n @error domain_error(link_type, Type) if the requested link-type\n is unknown or not supported on the target OS.", "prefix":"link_file" }, "files_ex:make_directory_path/1": { "body": ["make_directory_path(${1:Dir})$2\n$0" ], "description":" make_directory_path(+Dir) is det.\n\n Create Dir and all required components (like mkdir -p). Can\n raise various file-specific exceptions.", "prefix":"make_directory_path" }, "files_ex:relative_file_name/3": { "body": ["relative_file_name(${1:Path}, ${2:RelTo}, ${3:RelPath})$4\n$0" ], "description":" relative_file_name(+Path:atom, +RelTo:atom, -RelPath:atom) is det.\n relative_file_name(-Path:atom, +RelTo:atom, +RelPath:atom) is det.\n\n True when RelPath is Path, relative to RelTo. Path and RelTo are\n first handed to absolute_file_name/2, which makes the absolute\n *and* canonical. Below are two examples:\n\n ==\n ?- relative_file_name('/home/janw/nice',\n '/home/janw/deep/dir/file', Path).\n Path = '../../nice'.\n\n ?- relative_file_name(Path, '/home/janw/deep/dir/file', '../../nice').\n Path = '/home/janw/nice'.\n ==\n\n @param All paths must be in canonical POSIX notation, i.e.,\n using / to separate segments in the path. See\n prolog_to_os_filename/2.\n @bug This predicate is defined as a _syntactical_ operation.", "prefix":"relative_file_name" }, "files_ex:set_time_file/3": { "body": ["set_time_file(${1:File}, ${2:OldTimes}, ${3:NewTimes})$4\n$0" ], "description":" set_time_file(+File, -OldTimes, +NewTimes) is det.\n\n Query and set POSIX time attributes of a file. Both OldTimes and\n NewTimes are lists of option-terms. Times are represented in\n SWI-Prolog's standard floating point numbers. New times may be\n specified as =now= to indicate the current time. Defined options\n are:\n\n * access(Time)\n Describes the time of last access of the file. This value\n can be read and written.\n\n * modified(Time)\n Describes the time the contents of the file was last\n modified. This value can be read and written.\n\n * changed(Time)\n Describes the time the file-structure itself was changed by\n adding (link()) or removing (unlink()) names.\n\n Below are some example queries. The first retrieves the\n access-time, while the second sets the last-modified time to the\n current time.\n\n ==\n ?- set_time_file(foo, [access(Access)], []).\n ?- set_time_file(foo, [], [modified(now)]).\n ==", "prefix":"set_time_file" }, "filesex:copy_directory/2": { "body":"copy_directory(${1:From}, ${2:To})$3\n$0", "description":"[det]copy_directory(+From, +To).\nCopy the contents of the directory From to To (recursively). If To is the name of an existing directory, the contents of From are copied into To. I.e., no subdirectory using the basename of From is created.", "prefix":"copy_directory" }, "filesex:copy_file/2": { "body":"copy_file(${1:From}, ${2:To})$3\n$0", "description":"[det]copy_file(From, To).\nCopy a file into a new file or directory. The data is copied as binary data.", "prefix":"copy_file" }, "filesex:delete_directory_and_contents/1": { "body":"delete_directory_and_contents(${1:Dir})$2\n$0", "description":"[det]delete_directory_and_contents(+Dir).\nRecursively remove the directory Dir and its contents. If Dir is a symbolic link or symbolic links inside Dir are encountered, the links are removed rather than their content. 
Use with care!", "prefix":"delete_directory_and_contents" }, "filesex:delete_directory_contents/1": { "body":"delete_directory_contents(${1:Dir})$2\n$0", "description":"[det]delete_directory_contents(+Dir).\nRemove all content from directory Dir, without removing Dir itself. Similar to delete_directory_and_contents/2, if symbolic links are encountered in Dir, the links are removed rather than their content.", "prefix":"delete_directory_contents" }, "filesex:directory_file_path/3": { "body":"directory_file_path(${1:Directory}, ${2:File}, ${3:Path})$4\n$0", "description":"[det]directory_file_path(?Directory, ?File, +Path).\nTrue when Path is the full path-name for File in Dir. This is comparable to atom_concat(Directory, File, Path), but it ensures there is exactly one / between the two parts. Notes: \n\nIn mode (+,+,-), if File is given and absolute, Path is unified to File.\nMode (-,-,+) uses file_directory_name/2 and file_base_name/2\n\n", "prefix":"directory_file_path" }, "filesex:link_file/3": { "body":"link_file(${1:OldPath}, ${2:NewPath}, ${3:Type})$4\n$0", "description":"[det]link_file(+OldPath, +NewPath, +Type).\nCreate a link in the filesystem from NewPath to OldPath. Type defines the type of link and is one of hard or symbolic. With some limitations, these functions also work on Windows. First of all, the underlying filesystem must support links. This requires NTFS. Second, symbolic links are only supported in Vista and later. \n\nErrors: domain_error(link_type, Type) if the requested link-type is unknown or not supported on the target OS.\n\n ", "prefix":"link_file" }, "filesex:make_directory_path/1": { "body":"make_directory_path(${1:Dir})$2\n$0", "description":"[det]make_directory_path(+Dir).\nCreate Dir and all required components (like mkdir -p). Can raise various file-specific exceptions.", "prefix":"make_directory_path" }, "filesex:process_group_kill/2": { "body":"process_group_kill(${1:PID}, ${2:Signal})$3\n$0", "description":"[det]process_group_kill(+PID, +Signal).\nSend signal to the group containing process PID. Default is term. See process_wait/1 for a description of signal handling. In Windows, the same restriction on PID applies: it must have been created from process_create/3, and the group is terminated via the TerminateJobObject API.", "prefix":"process_group_kill" }, "filesex:relative_file_name/3": { "body":"relative_file_name(${1:Path}, ${2:RelTo}, ${3:RelPath})$4\n$0", "description":"[det]relative_file_name(-Path:atom, +RelTo:atom, +RelPath:atom).\nTrue when RelPath is Path, relative to RelTo. Path and RelTo are first handed to absolute_file_name/2, which makes them absolute and canonical. Below are two examples: \n\n?- relative_file_name('/home/janw/nice',\n '/home/janw/deep/dir/file', Path).\nPath = '../../nice'.\n\n?- relative_file_name(Path, '/home/janw/deep/dir/file', '../../nice').\nPath = '/home/janw/nice'.\n\n All paths must be in canonical POSIX notation, i.e., using / to separate segments in the path. See prolog_to_os_filename/2. bug: This predicate is defined as a syntactical operation.\n\n ", "prefix":"relative_file_name" }, "filesex:set_time_file/3": { "body":"set_time_file(${1:File}, ${2:OldTimes}, ${3:NewTimes})$4\n$0", "description":"[det]set_time_file(+File, -OldTimes, +NewTimes).\nQuery and set POSIX time attributes of a file. Both OldTimes and NewTimes are lists of option-terms. Times are represented in SWI-Prolog's standard floating point numbers. New times may be specified as now to indicate the current time. 
Defined options are: access(Time): Describes the time of last access of the file. This value can be read and written.\n\nmodified(Time): Describes the time the contents of the file was last modified. This value can be read and written.\n\nchanged(Time): Describes the time the file-structure itself was changed by adding (link()) or removing (unlink()) names.\n\n Below are some example queries. The first retrieves the access-time, while the second sets the last-modified time to the current time. \n\n\n\n?- set_time_file(foo, [access(Access)], []).\n?- set_time_file(foo, [], [modified(now)]).\n\n ", "prefix":"set_time_file" }, "fill_buffer/1": { "body":"fill_buffer(${1:Stream})$2\n$0", "description":"[det]fill_buffer(+Stream).\nFill the Stream's input buffer. Subsequent calls try to read more input until the buffer is completely filled. This predicate is used together with read_pending_codes/3 to process input with minimal buffering.", "prefix":"fill_buffer" }, "find_chr_constraint/1": { "body":"find_chr_constraint(${1:Constraint})$2\n$0", "description":"find_chr_constraint(-Constraint).\nReturns a constraint in the constraint store. Via backtracking, all constraints in the store can be enumerated.", "prefix":"find_chr_constraint" }, "findall/3": { "body":"findall(${1:Template}, ${2:Goal}, ${3:Bag})$4\n$0", "description":"[ISO]findall(+Template, :Goal, -Bag).\nCreate a list of the instantiations Template gets successively on backtracking over Goal and unify the result with Bag. Succeeds with an empty list if Goal has no solutions. findall/3 is equivalent to bagof/3 with all free variables bound with the existential operator (^), except that bagof/3 fails when Goal has no solutions.", "prefix":"findall" }, "findall/4": { "body":"findall(${1:Template}, ${2:Goal}, ${3:Bag}, ${4:Tail})$5\n$0", "description":"findall(+Template, :Goal, -Bag, +Tail).\nAs findall/3, but returns the result as the difference list Bag-Tail. The 3-argument version is defined as \n\nfindall(Templ, Goal, Bag) :-\n findall(Templ, Goal, Bag, [])\n\n ", "prefix":"findall" }, "findnsols/4": { "body":"findnsols(${1:N}, ${2:Template}, ${3:Goal}, ${4:List})$5\n$0", "description":"[nondet]findnsols(+N, @Template, :Goal, -List).\n", "prefix":"findnsols" }, "findnsols/5": { "body":"findnsols(${1:N}, ${2:Template}, ${3:Goal}, ${4:List}, ${5:Tail})$6\n$0", "description":"[nondet]findnsols(+N, @Template, :Goal, -List, ?Tail).\nAs findall/3 and findall/4, but generates at most N solutions. If N solutions are returned, this predicate succeeds with a choice point if Goal has a choice point. Backtracking returns the next chunk of (at most) N solutions. In addition to passing a plain integer for N, a term of the form count(N) is accepted. Using count(N), the size of the next chunk can be controlled using nb_setarg/3. The non-deterministic behaviour used to implement the chunk option in library(pengines). Based on Ciao, but the Ciao version is deterministic. Portability can be achieved by wrapping the goal in once/1. Below are three examples. The first illustrates standard chunking of answers. The second illustrates that the chunk size can be adjusted dynamically and the last illustrates that no choice point is left if Goal leaves no choice-point after the last solution. 
\n\n?- findnsols(5, I, between(1, 12, I), L).\nL = [1, 2, 3, 4, 5] ;\nL = [6, 7, 8, 9, 10] ;\nL = [11, 12].\n\n?- State = count(2),\n findnsols(State, I, between(1, 12, I), L),\n nb_setarg(1, State, 5).\nState = count(5), L = [1, 2] ;\nState = count(5), L = [3, 4, 5, 6, 7] ;\nState = count(5), L = [8, 9, 10, 11, 12].\n\n?- findnsols(4, I, between(1, 4, I), L).\nL = [1, 2, 3, 4].\n\n ", "prefix":"findnsols" }, "flag/3": { "body":"flag(${1:Key}, ${2:Old}, ${3:New})$4\n$0", "description":"flag(+Key, -Old, +New).\nTrue when Old is the current value of the flag Key and the flag has been set to New. New can be an arithmetic expression. The update is atomic. This predicate can be used to create a shared global counter as illustrated in the example below. \n\nnext_id(Id) :-\n flag(my_id, Id, Id+1).\n\n \n\n", "prefix":"flag" }, "float/1": { "body":"float(${1:Term})$2\n$0", "description":"[ISO]float(@Term).\nTrue if Term is bound to a floating point number.", "prefix":"float" }, "float_fractional_part/1": { "body":"float_fractional_part(${1:Expr})$2\n$0", "description":"[ISO]float_fractional_part(+Expr).\nFractional part of a floating point number. Negative if Expr is negative, rational if Expr is rational and 0 if Expr is integer. The following relation is always true: X is float_fractional_part(X) + float_integer_part(X).", "prefix":"float_fractional_part" }, "float_integer_part/1": { "body":"float_integer_part(${1:Expr})$2\n$0", "description":"[ISO]float_integer_part(+Expr).\nInteger part of floating point number. Negative if Expr is negative, Expr if Expr is integer.", "prefix":"float_integer_part" }, "floor/1": { "body":"floor(${1:Expr})$2\n$0", "description":"[ISO]floor(+Expr).\nEvaluate Expr and return the largest integer smaller or equal to the result of the evaluation.", "prefix":"floor" }, "flush_output/1": { "body":"flush_output(${1:Stream})$2\n$0", "description":"[ISO]flush_output(+Stream).\nFlush output on the specified stream. The stream must be open for writing.", "prefix":"flush_output" }, "forall/2": { "body":"forall(${1:Cond}, ${2:Action})$3\n$0", "description":"[semidet]forall(:Cond, :Action).\nFor all alternative bindings of Cond, Action can be proven. The example verifies that all arithmetic statements in the given list are correct. It does not say which is wrong if one proves wrong. \n\n?- forall(member(Result = Formula, [2 = 1 + 1, 4 = 2 * 2]),\n Result =:= Formula).\n\n The predicate forall/2 is implemented as \\+ ( Cond, \\+ Action), i.e., There is no instantiation of Cond for which Action is false.. The use of double negation implies that forall/2 does not change any variable bindings. It proves a relation. The forall/2 control structure can be used for its side-effects. E.g., the following asserts relations in a list into the dynamic database: \n\n\n\n?- forall(member(Child-Parent, ChildPairs),\n assertz(child_of(Child, Parent))).\n\n Using forall/2 as forall(Generator, SideEffect) is preferred over the classical failure driven loop as shown below because it makes it explicit which part of the construct is the generator and which part creates the side effects. Also, unexpected failure of the side effect causes the construct to fail. Failure makes it evident that there is an issue with the code, while a failure driven loop would succeed with an erroneous result. \n\n\n\n ...,\n ( Generator,\n SideEffect,\n fail\n ; true\n )\n\n If your intent is to create variable bindings, the forall/2 control structure is inadequate. 
Possibly you are looking for maplist/2, findall/3 or foreach/2.\n\n", "prefix":"forall" }, "format/1": { "body":"format(${1:Format})$2\n$0", "description":"format(+Format).\nDefined as `format(Format) :- format(Format, []).'. See format/2 for details.", "prefix":"format" }, "format/2": { "body":"format(${1:Format}, ${2:Arguments})$3\n$0", "description":"format(+Format, :Arguments).\nFormat is an atom, list of character codes, or a Prolog string. Arguments provides the arguments required by the format specification. If only one argument is required and this single argument is not a list, the argument need not be put in a list. Otherwise the arguments are put in a list. Special sequences start with the tilde (~), followed by an optional numeric argument, optionally followed by a colon modifier (:), 121The colon modifiers is a SWI-Prolog extension, proposed by Richard O'Keefe. followed by a character describing the action to be undertaken. A numeric argument is either a sequence of digits, representing a positive decimal number, a sequence `, representing the character code value of the character (only useful for ~t) or a asterisk (*), in which case the numeric argument is taken from the next argument of the argument list, which should be a positive integer. E.g., the following three examples all pass 46 (.) to ~t: \n\n\n\n?- format('~w ~46t ~w~72|~n', ['Title', 'Page']).\n?- format('~w ~`.t ~w~72|~n', ['Title', 'Page']).\n?- format('~w ~*t ~w~72|~n', ['Title', 46, 'Page']).\n\n Numeric conversion (d, D, e, E, f, g and G) accept an arithmetic expression as argument. This is introduced to handle rational numbers transparently (see section 4.27.2.2). The floating point conversions allow for unlimited precision for printing rational numbers in decimal form. E.g., the following will write as many 3's as you want by changing the `50'. \n\n\n\n?- format('~50f', [10 rdiv 3]).\n3.33333333333333333333333333333333333333333333333333\n\n \n\n~ Output the tilde itself. \na Output the next argument, which must be an atom. This option is equivalent to w, except that it requires the argument to be an atom. \nc Interpret the next argument as a character code and add it to the output. This argument must be a valid Unicode character code. Note that the actually emitted bytes are defined by the character encoding of the output stream and an exception may be raised if the output stream is not capable of representing the requested Unicode character. See section 2.18.1 for details. \nd Output next argument as a decimal number. It should be an integer. If a numeric argument is specified, a dot is inserted argument positions from the right (useful for doing fixed point arithmetic with integers, such as handling amounts of money). The colon modifier (e.g., ~:d) causes the number to be printed according to the locale of the output stream. See section 4.23. \nD Same as d, but makes large values easier to read by inserting a comma every three digits left or right of the dot. This is the same as ~:d, but using the fixed English locale. \ne Output next argument as a floating point number in exponential notation. The numeric argument specifies the precision. Default is 6 digits. Exact representation depends on the C library function printf(). This function is invoked with the format %.e. \nE Equivalent to e, but outputs a capital E to indicate the exponent. \nf Floating point in non-exponential notation. The numeric argument defines the number of digits right of the decimal point. 
If the colon modifier (:) is used, the float is formatted using conventions from the current locale, which may define the decimal point as well as grouping of digits left of the decimal point. \ng Floating point in e or f notation, whichever is shorter. \nG Floating point in E or f notation, whichever is shorter. \ni Ignore next argument of the argument list. Produces no output. \nI Emit a decimal number using Prolog digit grouping (the underscore, _). The argument describes the size of each digit group. The default is 3. See also section 2.15.1.5. For example: ?- A is 1<<100, format('~10I', [A]). 1_2676506002_2822940149_6703205376 \nk Give the next argument to write_canonical/1.\nn Output a newline character.\nN Only output a newline if the last character output on this stream was not a newline. Not properly implemented yet.\np Give the next argument to print/1.\nq Give the next argument to writeq/1. \nr Print integer in radix numeric argument notation. Thus ~16r prints its argument hexadecimal. The argument should be in the range [2, ... , 36]. Lowercase letters are used for digits above 9. The colon modifier may be used to form locale-specific digit groups. \nR Same as r, but uses uppercase letters for digits above 9.\ns Output text from a list of character codes or a string (see string/1 and section 5.2) from the next argument.122The s modifier also accepts an atom for compatibility. This is deprecated due to the ambiguity of [].\n@ Interpret the next argument as a goal and execute it. Output written to the current_output stream is inserted at this place. Goal is called in the module calling format/3. This option is not present in the original definition by Quintus, but supported by some other Prolog systems.\nt All remaining space between 2 tab stops is distributed equally over ~t statements between the tab stops. This space is padded with spaces by default. If an argument is supplied, it is taken to be the character code of the character used for padding. This can be used to do left or right alignment, centering, distributing, etc. See also ~| and ~+ to set tab stops. A tab stop is assumed at the start of each line.\n| Set a tab stop on the current position. If an argument is supplied set a tab stop on the position of that argument. This will cause all ~t's to be distributed between the previous and this tab stop. \n+ Set a tab stop (as ~|) relative to the last tab stop or the beginning of the line if no tab stops are set before the ~+. This constructs can be used to fill fields. The partial format sequence below prints an integer right-aligned and padded with zeros in 6 columns. The ... sequences in the example illustrate that the integer is aligned in 6 columns regardless of the remainder of the format specification. format('...~|~`0t~d~6+...', [..., Integer, ...]) \nw Give the next argument to write/1.\nW Give the next two arguments to write_term/2. For example, format('~W', [Term, [numbervars(true)]]). This option is SWI-Prolog specific.\n\n Example: \n\n\n\nsimple_statistics :-\n % left to the user\n format('~tStatistics~t~72|~n~n'),\n format('Runtime: ~`.t ~2f~34| Inferences: ~`.t ~D~72|~n',\n [RunT, Inf]),\n ....\n\n will output \n\n\n\n Statistics\n\nRuntime: .................. 3.45 Inferences: .......... 60,345\n\n ", "prefix":"format" }, "format/3": { "body":"format(${1:Output}, ${2:Format}, ${3:Arguments})$4\n$0", "description":"format(+Output, +Format, :Arguments).\nAs format/2, but write the output on the given Output. 
The de-facto standard only allows Output to be a stream. The SWI-Prolog implementation allows all valid arguments for with_output_to/2.123Earlier versions defined sformat/3 . These predicates have been moved to the library library(backcomp). For example: \n\n?- format(atom(A), '~D', [1000000]).\nA = '1,000,000'\n\n \n\n", "prefix":"format" }, "format_predicate/2": { "body":"format_predicate(${1:Char}, ${2:Head})$3\n$0", "description":"format_predicate(+Char, +Head).\nIf a sequence ~c (tilde, followed by some character) is found, the format/3 and friends first check whether the user has defined a predicate to handle the format. If not, the built-in formatting rules described above are used. Char is either a character code or a one-character atom, specifying the letter to be (re)defined. Head is a term, whose name and arity are used to determine the predicate to call for the redefined formatting character. The first argument to the predicate is the numeric argument of the format command, or the atom default if no argument is specified. The remaining arguments are filled from the argument list. The example below defines ~T to print a timestamp in ISO8601 format (see format_time/3). The subsequent block illustrates a possible call. \n\n:- format_predicate('T', format_time(_Arg,_Time)).\n\nformat_time(_Arg, Stamp) :-\n must_be(number, Stamp),\n format_time(current_output, '%FT%T%z', Stamp).\n\n \n\n?- get_time(Now),\n format('Now, it is ~T~n', [Now]).\nNow, it is 2012-06-04T19:02:01+0200\nNow = 1338829321.6620328.\n\n ", "prefix":"format_predicate" }, "format_time/3": { "body":"format_time(${1:Out}, ${2:Format}, ${3:StampOrDateTime})$4\n$0", "description":"format_time(+Out, +Format, +StampOrDateTime).\nModelled after POSIX strftime(), using GNU extensions. Out is a destination as specified with with_output_to/2. Format is an atom or string with the following conversions. Conversions start with a percent (%) character.126Descriptions taken from Linux Programmer's Manual StampOrDateTime is either a numeric time-stamp, a term date(Y,M,D,H,M,S,O,TZ,DST) or a term date(Y,M,D). \n\na The abbreviated weekday name according to the current locale. Use format_time/4 for POSIX locale.\nA The full weekday name according to the current locale. Use format_time/4 for POSIX locale.\nb The abbreviated month name according to the current locale. Use format_time/4 for POSIX locale.\nB The full month name according to the current locale. Use format_time/4 for POSIX locale.\nc The preferred date and time representation for the current locale.\nC The century number (year/100) as a 2-digit integer.\nd The day of the month as a decimal number (range 01 to 31).\nD Equivalent to %m/%d/%y. (For Americans only. Americans should note that in other countries %d/%m/%y is rather common. This means that in an international context this format is ambiguous and should not be used.)\ne Like %d, the day of the month as a decimal number, but a leading zero is replaced by a space.\nE Modifier. Not implemented.\nf Number of microseconds. The f can be prefixed by an integer to print the desired number of digits. E.g., %3f prints milliseconds. This format is not covered by any standard, but available with different format specifiers in various incarnations of the strftime() function.\nF Equivalent to %Y-%m-%d (the ISO 8601 date format).\ng Like %G, but without century, i.e., with a 2-digit year (00-99).\nG The ISO 8601 year with century as a decimal number. The 4-digit year corresponding to the ISO week number (see %V). 
This has the same format and value as %y, except that if the ISO week number belongs to the previous or next year, that year is used instead.\nV The ISO 8601:1988 week number of the current year as a decimal number, range 01 to 53, where week 1 is the first week that has at least 4 days in the current year, and with Monday as the first day of the week. See also %U and %W.\nh Equivalent to %b.\nH The hour as a decimal number using a 24-hour clock (range 00 to 23).\nI The hour as a decimal number using a 12-hour clock (range 01 to 12).\nj The day of the year as a decimal number (range 001 to 366).\nk The hour (24-hour clock) as a decimal number (range 0 to 23); single digits are preceded by a blank. (See also %H.)\nl The hour (12-hour clock) as a decimal number (range 1 to 12); single digits are preceded by a blank. (See also %I.)\nm The month as a decimal number (range 01 to 12).\nM The minute as a decimal number (range 00 to 59).\nn A newline character.\nO Modifier to select locale-specific output. Not implemented.\np Either `AM' or `PM' according to the given time value, or the corresponding strings for the current locale. Noon is treated as `pm' and midnight as `am'.\nP Like %p but in lowercase: `am' or `pm' or a corresponding string for the current locale.\nr The time in a.m. or p.m. notation. In the POSIX locale this is equivalent to `%I:%M:%S %p'.\nR The time in 24-hour notation (%H:%M). For a version including the seconds, see %T below.\ns The number of seconds since the Epoch, i.e., since 1970-01-01 00:00:00 UTC.\nS The second as a decimal number (range 00 to 60). (The range is up to 60 to allow for occasional leap seconds.)\nt A tab character.\nT The time in 24-hour notation (%H:%M:%S).\nu The day of the week as a decimal, range 1 to 7, Monday being 1. See also %w.\nU The week number of the current year as a decimal number, range 00 to 53, starting with the first Sunday as the first day of week 01. See also %V and %W.\nw The day of the week as a decimal, range 0 to 6, Sunday being 0. See also %u.\nW The week number of the current year as a decimal number, range 00 to 53, starting with the first Monday as the first day of week 01.\nx The preferred date representation for the current locale without the time.\nX The preferred time representation for the current locale without the date.\ny The year as a decimal number without a century (range 00 to 99).\nY The year as a decimal number including the century.\nz The timezone as hour offset from GMT using the format HHmm. Required to emit RFC822-conforming dates (using '%a, %d %b %Y %T %z'). Our implementation supports %:z, which modifies the output to HH:mm as required by XML-Schema. Note that both notations are valid in ISO 8601. The sequence %:z is compatible to the GNU date(1) command.\nZ The timezone or name or abbreviation.\n+ The date and time in date(1) format.\n% A literal `%' character.\n\n The table below gives some format strings for popular time representations. RFC1123 is used by HTTP. The full implementation of http_timestamp/2 as available from library(http/http_header) is here. 
\n\n\n\nhttp_timestamp(Time, Atom) :-\n stamp_date_time(Time, Date, 'UTC'),\n format_time(atom(Atom),\n '%a, %d %b %Y %T GMT',\n Date, posix).\n\n \n\nStandard Format string xsd '%FT%T%:z' ISO8601 '%FT%T%z' RFC822 '%a, %d %b %Y %T %z' RFC1123 '%a, %d %b %Y %T GMT' ", "prefix":"format_time" }, "format_time/4": { "body":"format_time(${1:Out}, ${2:Format}, ${3:StampOrDateTime}, ${4:Locale})$5\n$0", "description":"format_time(+Out, +Format, +StampOrDateTime, +Locale).\nFormat time given a specified Locale. This predicate is a work-around for lacking proper portable and thread-safe time and locale handling in current C libraries. In its current implementation the only value allowed for Locale is posix, which currently only modifies the behaviour of the a, A, b and B format specifiers. The predicate is used to be able to emit POSIX locale week and month names for emitting standardised time-stamps such as RFC1123.", "prefix":"format_time" }, "free_dtd/1": { "body":"free_dtd(${1:DTD})$2\n$0", "description":"free_dtd(+DTD).\nDeallocate all resources associated to the DTD. Further use of DTD is invalid.", "prefix":"free_dtd" }, "free_sgml_parser/1": { "body":"free_sgml_parser(${1:Parser})$2\n$0", "description":"free_sgml_parser(+Parser).\nDestroy all resources related to the parser. This does not destroy the DTD if the parser was created using the dtd(DTD) option.", "prefix":"free_sgml_parser" }, "free_table/1": { "body":"free_table(${1:Handle})$2\n$0", "description":"free_table(+Handle).\nClose and remove the handle. After this operation, Handle becomes invalid and further references to it causes undefined behaviour. \n\n", "prefix":"free_table" }, "freeze/2": { "body":"freeze(${1:Var}, ${2:Goal})$3\n$0", "description":"freeze(+Var, :Goal).\nDelay the execution of Goal until Var is bound (i.e. is not a variable or attributed variable). If Var is bound on entry freeze/2 is equivalent to call/1. The freeze/2 predicate is realised using an attributed variable associated with the module freeze. Use frozen(Var, Goal) to find out whether and which goals are delayed on Var.", "prefix":"freeze" }, "frozen/2": { "body":"frozen(${1:Var}, ${2:Goal})$3\n$0", "description":"frozen(@Var, -Goal).\nUnify Goal with the goal or conjunction of goals delayed on Var. If no goals are frozen on Var, Goal is unified to true.", "prefix":"frozen" }, "functor/3": { "body":"functor(${1:Term}, ${2:Name}, ${3:Arity})$4\n$0", "description":"[ISO]functor(?Term, ?Name, ?Arity).\nTrue when Term is a term with functor Name/Arity. If Term is a variable it is unified with a new term whose arguments are all different variables (such a term is called a skeleton). If Term is atomic, Arity will be unified with the integer 0, and Name will be unified with Term. Raises instantiation_error() if Term is unbound and Name/Arity is insufficiently instantiated. SWI-Prolog also supports terms with arity 0, as in a() (see section 5. Such terms must be processed using compound_name_arity/3. The predicate functor/3 and =../2 raise a domain_error when faced with these terms. Without this precaution, the inconsistency demonstrated below could happen silently.90Raising a domain error was suggested by Jeff Schultz. \n\n\n\n?- functor(a(), N, A).\nN = a, A = 0.\n?- functor(T, a, 0).\nT = a.\n\n ", "prefix":"functor" }, "garbage_collect/0": { "body":"garbage_collect$1\n$0", "description":"garbage_collect.\nInvoke the global and trail stack garbage collector. Normally the garbage collector is invoked automatically if necessary. 
Explicit invocation might be useful to reduce the need for garbage collections in time-critical segments of the code. After the garbage collection trim_stacks/0 is invoked to release the collected memory resources.", "prefix":"garbage_collect" }, "garbage_collect_atoms/0": { "body":"garbage_collect_atoms$1\n$0", "description":"garbage_collect_atoms.\nReclaim unused atoms. Normally invoked after agc_margin (a Prolog flag) atoms have been created. On multithreaded versions the actual collection is delayed until there are no threads performing normal garbage collection. In this case garbage_collect_atoms/0 returns immediately. Note that there is no guarantee it will ever happen, as there may always be threads performing garbage collection.", "prefix":"garbage_collect_atoms" }, "garbage_collect_clauses/0": { "body":"garbage_collect_clauses$1\n$0", "description":"garbage_collect_clauses.\nReclaim retracted clauses. During normal operation, retracting a clause implies setting the erased generation to the current generation of the database and increment the generation. Keeping the clause around is both needed to realise the logical update view and deal with the fact that other threads may be executing the clause. Both static and dynamic code is processed this way.48Up to version 7.3.11, dynamic code was handled using reference counts.. The clause garbage collector (CGC) scans the environment stacks of all threads for referenced dirty predicates and at which generation this reference accesses the predicate. It then removes the references for clauses that have been retracted before the oldest access generation from the clause list as well as the secondary clauses indexes of the predicate. If the clause list is not being scanned, the clause references and ultimately the clause itself is reclaimed. \n\nThe clause garbage collector is called under three conditions, (1) after reloading a source file, (2) if the memory occupied by retracted but not yet reclaimed clauses exceeds 12.5% of the program store, or (3) if skipping dead clauses in the clause lists becomes too costly. The cost of clause garbage collection is proportional with the total size of the local stack of all threads (the scanning phase) and the number of clauses in all `dirty' predicates (the reclaiming phase).\n\n", "prefix":"garbage_collect_clauses" }, "gdebug/0": { "body":"gdebug$1\n$0", "description":"gdebug.\nUtility defined as guitracer,debug.", "prefix":"gdebug" }, "gensym:gensym/2": { "body":"gensym(${1:Base}, ${2:Unique})$3\n$0", "description":"gensym(+Base, -Unique).\nGenerate a unique atom from base Base and unify it with Unique. Base should be an atom. The first call will return 1 , the next 2 , etc. Note that this is no guarantee that the atom is unique in the system.", "prefix":"gensym" }, "gensym:reset_gensym/0": { "body":"reset_gensym$1\n$0", "description":"reset_gensym.\nReset gensym for all registered keys. This predicate is available for compatibility only. New code is strongly advised to avoid the use of reset_gensym or at least to reset only the keys used by your program to avoid unexpected side effects on other components.", "prefix":"reset_gensym" }, "gensym:reset_gensym/1": { "body":"reset_gensym(${1:Base})$2\n$0", "description":"reset_gensym(+Base).\nRestart generation of identifiers from Base at 1. Used to make sure a program produces the same results on subsequent runs. 
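For illustration only (the base name foo is an arbitrary example, not taken from the original documentation): \n\n?- gensym(foo, X), gensym(foo, Y). % foo is an arbitrary base\nX = foo1, Y = foo2.\n?- reset_gensym(foo), gensym(foo, Z).\nZ = foo1.\n\n 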
Use with care.", "prefix":"reset_gensym" }, "get/1": { "body":"get(${1:Char})$2\n$0", "description":"[deprecated]get(-Char).\nRead the current input stream and unify the next non-blank character with Char. Char is unified with -1 on end of file. The predicate get/1 operates on character codes. See also get0/1.", "prefix":"get" }, "get/2": { "body":"get(${1:Stream}, ${2:Char})$3\n$0", "description":"[deprecated]get(+Stream, -Char).\nRead the next non-blank character from Stream. See also get/1, get0/1 and get0/2.", "prefix":"get" }, "get0/1": { "body":"get0(${1:Char})$2\n$0", "description":"[deprecated]get0(-Char).\nEdinburgh version of the ISO get_code/1 predicate. Note that Edinburgh Prolog didn't support wide characters and therefore technically speaking get0/1 should have been mapped to get_byte/1. The intention of get0/1, however, is to read character codes.", "prefix":"get0" }, "get0/2": { "body":"get0(${1:Stream}, ${2:Char})$3\n$0", "description":"[deprecated]get0(+Stream, -Char).\nEdinburgh version of the ISO get_code/2 predicate. See also get0/1.", "prefix":"get0" }, "get_attr/3": { "body":"get_attr(${1:Var}, ${2:Module}, ${3:Value})$4\n$0", "description":"get_attr(+Var, +Module, -Value).\nRequest the current value for the attribute named Module. If Var is not an attributed variable or the named attribute is not associated to Var this predicate fails silently. If Module is not an atom, a type error is raised.", "prefix":"get_attr" }, "get_attrs/2": { "body":"get_attrs(${1:Var}, ${2:Attributes})$3\n$0", "description":"get_attrs(+Var, -Attributes).\nGet all attributes of Var. Attributes is a term of the form att(Module, Value, MoreAttributes), where MoreAttributes is [] for the last attribute.", "prefix":"get_attrs" }, "get_byte/1": { "body":"get_byte(${1:Byte})$2\n$0", "description":"[ISO]get_byte(-Byte).\nRead the current input stream and unify the next byte with Byte (an integer between 0 and 255). Byte is unified with -1 on end of file.", "prefix":"get_byte" }, "get_byte/2": { "body":"get_byte(${1:Stream}, ${2:Byte})$3\n$0", "description":"[ISO]get_byte(+Stream, -Byte).\nRead the next byte from Stream and unify Byte with an integer between 0 and 255.", "prefix":"get_byte" }, "get_char/1": { "body":"get_char(${1:Char})$2\n$0", "description":"[ISO]get_char(-Char).\nRead the current input stream and unify Char with the next character as a one-character atom. See also atom_chars/2. On end-of-file, Char is unified to the atom end_of_file.", "prefix":"get_char" }, "get_char/2": { "body":"get_char(${1:Stream}, ${2:Char})$3\n$0", "description":"[ISO]get_char(+Stream, -Char).\nUnify Char with the next character from Stream as a one-character atom. See also get_char/2, get_byte/2 and get_code/2.", "prefix":"get_char" }, "get_code/1": { "body":"get_code(${1:Code})$2\n$0", "description":"[ISO]get_code(-Code).\nRead the current input stream and unify Code with the character code of the next character. Code is unified with -1 on end of file. See also get_char/1.", "prefix":"get_code" }, "get_code/2": { "body":"get_code(${1:Stream}, ${2:Code})$3\n$0", "description":"[ISO]get_code(+Stream, -Code).\nRead the next character code from Stream.", "prefix":"get_code" }, "get_dict/3": { "body":"get_dict(${1:Key}, ${2:Dict}, ${3:Value})$4\n$0", "description":"get_dict(?Key, +Dict, -Value).\nUnify the value associated with Key in dict with Value. If Key is unbound, all associations in Dict are returned on backtracking. The order in which the associations are returned is undefined. 
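As a minimal illustrative query (the dict _{a:1, b:2} is a made-up example): \n\n?- get_dict(K, _{a:1, b:2}, V). % enumerates all key-value pairs on backtracking\nK = a, V = 1 ;\nK = b, V = 2.\n\n 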
This predicate is normally accessed using the functional notation Dict.Key. See section 5.4.1.", "prefix":"get_dict" }, "get_dict/5": { "body":"get_dict(${1:Key}, ${2:Dict}, ${3:Value}, ${4:NewDict}, ${5:NewValue})$6\n$0", "description":"[semidet]get_dict(+Key, +Dict, -Value, -NewDict, +NewValue).\nCreate a new dict after updating the value for Key. Fails if Value does not unify with the current value associated with Key. Acts according to the definition below. Dict is either a dict or a list that can be converted into a dict. \n\nget_dict(Key, Dict, Value, NewDict, NewValue) :-\n get_dict(Key, Dict, Value),\n put_dict(Key, Dict, NewValue, NewDict).\n\n ", "prefix":"get_dict" }, "get_flag/2": { "body":"get_flag(${1:Key}, ${2:Value})$3\n$0", "description":"get_flag(+Key, -Value).\nTrue when Value is the value currently associated with Key. If Key does not exist, a new flag with value `0' (zero) is created.", "prefix":"get_flag" }, "get_sgml_parser/2": { "body":"get_sgml_parser(${1:Parser}, ${2:Option})$3\n$0", "description":"get_sgml_parser(+Parser, -Option).\nRetrieve information on the current status of the parser. Notably useful if the parser is used in the call-back mode. Currently defined options: file(-File): Current file-name. Note that this may be different from the provided file if an external entity is being loaded.\n\nline(-Line): Line-offset from where the parser started its processing in the file-object.\n\ncharpos(-CharPos): Offset from where the parser started its processing in the file-object. See section 6.\n\ncharpos(-Start, -End): Character offsets of the start and end of the source processed causing the current call-back. Used in PceEmacs for colouring text in SGML and XML modes.\n\nsource(-Stream): Prolog stream being processed. May be used in the on_begin, etc. callbacks from sgml_parse/2.\n\ndialect(-Dialect): Return the current dialect used by the parser (sgml, html, html5, xhtml, xhtml5, xml or xmlns).\n\nevent_class(-Class): The event class can be requested in call-back events. It denotes the cause of the event, providing useful information for syntax highlighting. Defined values are: explicit: The code generating this event is explicitly present in the document. omitted: The current event is caused by the insertion of an omitted tag. This may be a normal event in SGML mode or an error in XML mode. shorttag: The current event (begin or end) is caused by an element written down using the shorttag notation. shortref: The current event is caused by the expansion of a shortref. This allows for highlighting shortref strings in the source-text. \n\ndoctype(-Element): Return the defined document-type (= toplevel element). See also set_sgml_parser/2.\n\ndtd(-DTD): Return the currently used DTD. See dtd_property/2 for obtaining information on the DTD such as element and attribute properties.\n\ncontext(-StackOfElements): Returns the stack of currently open elements as a list. The head of this list is the current element. This can be used to determine the context of, for example, CDATA events in call-back mode. The elements are passed as atoms. Currently no access to the attributes is provided.\n\nallowed(-Elements): Determines which elements may be inserted at the current location. This information is returned as a list of element-names. If character data is allowed in the current location, #pcdata is part of Elements. If no element is open, the doctype is returned. This option is intended to support syntax-sensitive editors. 
Such an editor should load the DTD, find an appropriate starting point and then feed all data between the starting point and the caret into the parser. Next it can use this option to determine the elements allowed at this point. Below is a code fragment illustrating this use given a parser with loaded DTD, an input stream and a start-location. \n\n ...,\n seek(In, Start, bof, _),\n set_sgml_parser(Parser, charpos(Start)),\n set_sgml_parser(Parser, doctype(_)),\n Len is Caret - Start,\n sgml_parse(Parser,\n [ source(In),\n content_length(Len),\n parse(input) % do not complete document\n ]),\n get_sgml_parser(Parser, allowed(Allowed)),\n ...\n\n \n\n ", "prefix":"get_sgml_parser" }, "get_single_char/1": { "body":"get_single_char(${1:Code})$2\n$0", "description":"get_single_char(-Code).\nGet a single character from input stream `user' (regardless of the current input stream). Unlike get_code/1, this predicate does not wait for a return. The character is not echoed to the user's terminal. This predicate is meant for keyboard menu selection, etc. If SWI-Prolog was started with the -tty option this predicate reads an entire line of input and returns the first non-blank character on this line, or the character code of the newline (10) if the entire line consisted of blank characters.", "prefix":"get_single_char" }, "get_string_code/3": { "body":"get_string_code(${1:Index}, ${2:String}, ${3:Code})$4\n$0", "description":"get_string_code(+Index, +String, -Code).\nSemi-deterministic version of string_code/3. In addition, this version provides strict range checking, throwing a domain error if Index is less than 1 or greater than the length of String. ECLiPSe provides this to support String[Index] notation.", "prefix":"get_string_code" }, "get_table_attribute/3": { "body":"get_table_attribute(${1:Handle}, ${2:Attribute}, ${3:Value})$4\n$0", "description":"get_table_attribute(+Handle, +Attribute, -Value).\nFetch attributes of the table. Defined attributes: \n\nfileUnify value with the name of the file with which the table is associated. field(N)Unify value with declaration of n-th (1-based) field. field_separatorUnify value with the field separator character. record_separatorUnify value with the record separator character. key_fieldUnify value with the 1-based index of the field that is sorted or fails if the table contains no sorted fields. field_countUnify value with the total number of columns in the table. sizeUnify value with the number of characters in the table-file, not the number of records. windowUnify value with a term Start - Size, indicating the properties of the current window. ", "prefix":"get_table_attribute" }, "get_time/1": { "body":"get_time(${1:TimeStamp})$2\n$0", "description":"get_time(-TimeStamp).\nReturn the current time as a TimeStamp. The granularity is system-dependent. See section 4.35.2.1.", "prefix":"get_time" }, "getbit/2": { "body":"getbit(${1:IntExprV}, ${2:IntExprI})$3\n$0", "description":"getbit(+IntExprV, +IntExprI).\nEvaluates to the bit value (0 or 1) of the IntExprI-th bit of IntExprV. Both arguments must evaluate to non-negative integers. The result is equivalent to (IntExprV >> IntExprI)/\\1, but more efficient because materialization of the shifted value is avoided. Future versions will optimise (IntExprV >> IntExprI)/\\1 to a call to getbit/2, providing both portability and performance.110This issue was fiercely debated at the ISO standard mailinglist. The name getbit was selected for compatibility with ECLiPSe, the only system providing this support. 
Richard O'Keefe disliked the name and argued that efficient handling of the above implementation is the best choice for this functionality.", "prefix":"getbit" }, "getenv/2": { "body":"getenv(${1:Name}, ${2:Value})$3\n$0", "description":"getenv(+Name, -Value).\nGet environment variable. Fails silently if the variable does not exist. Please note that environment variable names are case-sensitive on Unix systems and case-insensitive on Windows.", "prefix":"getenv" }, "getpass:getpass/1": { "body": ["getpass(${1:Passwd})$2\n$0" ], "description":" getpass(-Passwd)\n\n Asks the user for a password. Provides feedback as `*' characters\n The password typed is returned as a Prolog list. All intermediate\n results use XPCE strings rather than atoms to avoid finding the\n typed password by inspecting the Prolog or XPCE symbol-table.", "prefix":"getpass" }, "git:git/2": { "body": ["git(${1:Argv}, ${2:Options})$3\n$0" ], "description":" git(+Argv, +Options) is det.\n\n Run a GIT command. Defined options:\n\n * directory(+Dir)\n Execute in the given directory\n * output(-Out)\n Unify Out with a list of codes representing stdout of the\n command. Otherwise the output is handed to print_message/2\n with level =informational=.\n * error(-Error)\n As output(Out), but messages are printed at level =error=.\n * askpass(+Program)\n Export GIT_ASKPASS=Program", "prefix":"git" }, "git:git_branches/2": { "body": ["git_branches(${1:Branches}, ${2:Options})$3\n$0" ], "description":" git_branches(-Branches, +Options) is det.\n\n True when Branches is the list of branches in the repository.\n In addition to the usual options, this processes:\n\n - contains(Commit)\n Return only branches that contain Commit.", "prefix":"git_branches" }, "git:git_commit_data/3": { "body": [ "git_commit_data(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"git_commit_data('Param1','Param2','Param3')", "prefix":"git_commit_data" }, "git:git_default_branch/2": { "body": ["git_default_branch(${1:BranchName}, ${2:Options})$3\n$0" ], "description":" git_default_branch(-BranchName, +Options) is det.\n\n True when BranchName is the default branch of a repository.", "prefix":"git_default_branch" }, "git:git_describe/2": { "body": ["git_describe(${1:Version}, ${2:Options})$3\n$0" ], "description":" git_describe(-Version, +Options) is semidet.\n\n Describe the running version based on GIT tags and hashes.\n Options:\n\n * match(+Pattern)\n Only use tags that match Pattern (a Unix glob-pattern; e.g.\n =|V*|=)\n * directory(Dir)\n Provide the version-info for a directory that is part of\n a GIT-repository.\n * commit(+Commit)\n Describe Commit rather than =HEAD=\n\n @see git describe", "prefix":"git_describe" }, "git:git_hash/2": { "body": ["git_hash(${1:Hash}, ${2:Options})$3\n$0" ], "description":" git_hash(-Hash, +Options) is det.\n\n Return the hash of the indicated object.", "prefix":"git_hash" }, "git:git_log_data/3": { "body": ["git_log_data(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"git_log_data('Param1','Param2','Param3')", "prefix":"git_log_data" }, "git:git_ls_remote/3": { "body": ["git_ls_remote(${1:GitURL}, ${2:Refs}, ${3:Options})$4\n$0" ], "description":" git_ls_remote(+GitURL, -Refs, +Options) is det.\n\n Execute =|git ls-remote|= against the remote repository to fetch\n references from the remote. 
Options processed:\n\n * heads(Boolean)\n * tags(Boolean)\n * refs(List)\n\n For example, to find the hash of the remote =HEAD=, one can use\n\n ==\n ?- git_ls_remote('git://www.swi-prolog.org/home/pl/git/pl-devel.git',\n Refs, [refs(['HEAD'])]).\n Refs = ['5d596c52aa969d88e7959f86327f5c7ff23695f3'-'HEAD'].\n ==\n\n @param Refs is a list of pairs hash-name.", "prefix":"git_ls_remote" }, "git:git_ls_tree/2": { "body": ["git_ls_tree(${1:Entries}, ${2:Options})$3\n$0" ], "description":" git_ls_tree(-Entries, +Options) is det.\n\n True when Entries is a list of entries in the GIT\n repository. Each entry is a term:\n\n ==\n object(Mode, Type, Hash, Size, Name)\n ==", "prefix":"git_ls_tree" }, "git:git_open_file/4": { "body": [ "git_open_file(${1:GitRepoDir}, ${2:File}, ${3:Branch}, ${4:Stream})$5\n$0" ], "description":" git_open_file(+GitRepoDir, +File, +Branch, -Stream) is det.\n\n Open the file File in the given bare GIT repository on the given\n branch (treeish).\n\n @bug We cannot tell whether opening failed for some reason.", "prefix":"git_open_file" }, "git:git_process_output/3": { "body": ["git_process_output(${1:Argv}, ${2:OnOutput}, ${3:Options})$4\n$0" ], "description":" git_process_output(+Argv, :OnOutput, +Options) is det.\n\n Run a git-command and process the output with OnOutput, which is\n called as call(OnOutput, Stream).", "prefix":"git_process_output" }, "git:git_remote_branches/2": { "body": ["git_remote_branches(${1:GitURL}, ${2:Branches})$3\n$0" ], "description":" git_remote_branches(+GitURL, -Branches) is det.\n\n Exploit git_ls_remote/3 to fetch the branches from a remote\n repository without downloading it.", "prefix":"git_remote_branches" }, "git:git_remote_url/3": { "body": ["git_remote_url(${1:Remote}, ${2:URL}, ${3:Options})$4\n$0" ], "description":" git_remote_url(+Remote, -URL, +Options) is det.\n\n URL is the remote (fetch) URL for the given Remote.", "prefix":"git_remote_url" }, "git:git_shortlog/3": { "body": ["git_shortlog(${1:Dir}, ${2:ShortLog}, ${3:Options})$4\n$0" ], "description":" git_shortlog(+Dir, -ShortLog, +Options) is det.\n\n Fetch information like the GitWeb change overview. Processed\n options:\n\n * limit(+Count)\n Maximum number of commits to show (default is 10)\n * path(+Path)\n Only show commits that affect Path. Path is the path of\n a checked out file.\n * git_path(+Path)\n Similar to =path=, but Path is relative to the repository.\n\n @param ShortLog is a list of =git_log= records.", "prefix":"git_shortlog" }, "git:git_show/4": { "body": ["git_show(${1:Dir}, ${2:Hash}, ${3:Commit}, ${4:Options})$5\n$0" ], "description":" git_show(+Dir, +Hash, -Commit, +Options) is det.\n\n Fetch info from a GIT commit. Options processed:\n\n * diff(Diff)\n GIT option on how to format diffs. E.g. =stat=\n * max_lines(Count)\n Truncate the body at Count lines.\n\n @param Commit is a term git_commit(...)-Body. 
Body is currently\n a list of lines, each line represented as a list of\n codes.", "prefix":"git_show" }, "git:git_tags_on_branch/3": { "body": ["git_tags_on_branch(${1:Dir}, ${2:Branch}, ${3:Tags})$4\n$0" ], "description":" git_tags_on_branch(+Dir, +Branch, -Tags) is det.\n\n Tags is a list of tags in Branch on the GIT repository Dir, most\n recent tag first.\n\n @see Git tricks at http://mislav.uniqpath.com/2010/07/git-tips/", "prefix":"git_tags_on_branch" }, "git:is_git_directory/1": { "body": ["is_git_directory(${1:Directory})$2\n$0" ], "description":" is_git_directory(+Directory) is semidet.\n\n True if Directory is a git directory (either checked out or\n bare).", "prefix":"is_git_directory" }, "goal_expansion/2": { "body":"goal_expansion(${1:Goal1}, ${2:Goal2})$3\n$0", "description":"goal_expansion(+Goal1, -Goal2).\nLike term_expansion/2, goal_expansion/2 provides for macro expansion of Prolog source code. Between expand_term/2 and the actual compilation, the body of clauses is analysed and the goals are handed to expand_goal/2, which uses the goal_expansion/2 hook to do user-defined expansion. The predicate goal_expansion/2 is first called in the module that is being compiled, and then follows the module inheritance path as defined by default_module/2, i.e., by default user and system. If Goal is of the form Module:Goal where Module is instantiated, goal_expansion/2 is called on Goal using rules from module Module followed by default modules for Module. \n\nOnly goals appearing in the body of clauses when reading a source file are expanded using this mechanism, and only if they appear literally in the clause, or as an argument to a defined meta-predicate that is annotated using `0' (see meta_predicate/1). Other cases need a real predicate definition. \n\nThe expansion hook can use prolog_load_context/2 to obtain information about the context in which the goal is expanded such as the module, variable names or the encapsulating term.\n\n", "prefix":"goal_expansion" }, "goal_expansion/4": { "body":"goal_expansion(${1:Goal1}, ${2:Layout1}, ${3:Goal2}, ${4:Layout2})$5\n$0", "description":"goal_expansion(+Goal1, ?Layout1, -Goal2, -Layout2).\n", "prefix":"goal_expansion" }, "ground/1": { "body":"ground(${1:Term})$2\n$0", "description":"[ISO]ground(@Term).\nTrue if Term holds no free variables.", "prefix":"ground" }, "gspy/1": { "body":"gspy(${1:Predicate})$2\n$0", "description":"gspy(+Predicate).\nUtility defined as guitracer,spy(Predicate).", "prefix":"gspy" }, "gtrace/0": { "body":"gtrace$1\n$0", "description":"gtrace.\nUtility defined as guitracer,trace.", "prefix":"gtrace" }, "gui_tracer:gdebug/0": { "body": ["gdebug$1\n$0" ], "description":" gdebug is det.\n\n Same as debug/0, but uses the graphical tracer.", "prefix":"gdebug" }, "gui_tracer:gspy/1": { "body": ["gspy(${1:Spec})$2\n$0" ], "description":" gspy(:Spec) is det.\n\n Same as spy/1, but uses the graphical debugger.", "prefix":"gspy" }, "gui_tracer:gtrace/0": { "body": ["gtrace$1\n$0" ], "description":" gtrace is det.\n\n Like trace/0, but uses the graphical tracer.", "prefix":"gtrace" }, "gui_tracer:gtrace/1": { "body": ["gtrace(${1:Goal})$2\n$0" ], "description":" gtrace(:Goal) is det.\n\n Trace Goal in a separate thread, such that the toplevel remains\n free for user interaction.", "prefix":"gtrace" }, "gui_tracer:guitracer/0": { "body": ["guitracer$1\n$0" ], "description":" guitracer is det.\n\n Enable the graphical debugger. A subsequent call to trace/0\n opens the debugger window. 
The traditional debugger can be\n re-enabled using noguitracer/0.", "prefix":"guitracer" }, "gui_tracer:noguitracer/0": { "body": ["noguitracer$1\n$0" ], "description":" noguitracer is det.\n\n Disable the graphical debugger.\n\n @see guitracer/0", "prefix":"noguitracer" }, "guitracer/0": { "body":"guitracer$1\n$0", "description":"guitracer.\nThis predicate installs the above-mentioned hooks that redirect tracing to the window-based environment. No window appears. The debugger window appears as actual tracing is started through trace/0, by hitting a spy point defined by spy/1 or a break point defined using the PceEmacs command Prolog/Break at (Control-c b).", "prefix":"guitracer" }, "gxref/0": { "body":"gxref$1\n$0", "description":"gxref.\nRun cross-referencer on all currently loaded files and present a graphical overview of the result. As the predicate operates on the currently loaded application it must be run after loading the application.", "prefix":"gxref" }, "gzopen/3": { "body":"gzopen(${1:File}, ${2:Mode}, ${3:Stream})$4\n$0", "description":"gzopen(+File, +Mode, -Stream).\nSame as gzopen(File, Mode, Stream,[]).", "prefix":"gzopen" }, "gzopen/4": { "body":"gzopen(${1:File}, ${2:Mode}, ${3:Stream}, ${4:Options})$5\n$0", "description":"gzopen(+File, +Mode, -Stream, +Options).\nOpen gzip compatible File for reading or writing. If a file is opened in =append= mode, a new gzip image will be added to the end of the file. The gzip standard defines that a file can hold multiple gzip images and inflating the file results in a concatenated stream of all inflated images. Options are passed to open/4 and zopen/3. Default format is gzip.", "prefix":"gzopen" }, "halt/1": { "body":"halt(${1:Status})$2\n$0", "description":"[ISO]halt(+Status).\nTerminate Prolog execution with Status. This predicate calls PL_halt() which performs the following steps: \n\nSet the Prolog flag exit_status to Status. \nCall all hooks registered using at_halt/1. If Status equals 0 (zero) and any of these hooks calls cancel_halt/1, termination is cancelled. \nCall all hooks registered using PL_at_halt(). In the future, if any of these hooks returns non-zero, termination will be cancelled. Currently, this only prints a warning. \nPerform the following system cleanup actions: Cancel all threads, calling thread_at_exit/1 registered termination hooks. Threads not responding within 1 second are cancelled forcefully. Flush I/O and close all streams except for standard I/O. Reset the terminal if its properties were changed. Remove temporary files and incomplete compilation output. Reclaim memory. \nCall exit(Status) to terminate the process.\n\n", "prefix":"halt" }, "hash_stream:alarm/3": { "body":"alarm(${1:Time}, ${2:Callable}, ${3:Id})$4\n$0", "description":"alarm(+Time, :Callable, -Id).\nSame as alarm(Time, Callable, Id,[]).", "prefix":"alarm" }, "hash_stream:alarm/4": { "body":"alarm(${1:Time}, ${2:Callable}, ${3:Id}, ${4:Options})$5\n$0", "description":"alarm(+Time, :Callable, -Id, +Options).\nSchedule Callable to be called Time seconds from now. Time is a number (integer or float). Callable is called on the next pass through a call- or redo-port of the Prolog engine, or a call to the PL_handle_signals() routine from SWI-Prolog. Id is unified with a reference to the timer. The resolution of the alarm depends on the underlying implementation, which is based on pthread_cond_timedwait() (on Windows on the pthread emulation thereof). Long-running foreign predicates that do not call PL_handle_signals() may further delay the alarm. 
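As an illustrative sketch (the message alarm_fired and the 2-second delay are arbitrary choices; remove(true) is one of the options documented below): \n\n?- alarm(2, writeln(alarm_fired), _Id, [remove(true)]). % prints alarm_fired roughly two seconds later\n\n 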
The relation to blocking system calls (sleep, reading from slow devices, etc.) is undefined and varies between implementations. \n\nOptions is a list of Name(Value) terms. Defined options are: \n\nremove(Bool): If true (default false), the timer is removed automatically after fireing. Otherwise it must be destroyed explicitly using remove_alarm/1.\n\ninstall(Bool): If false (default true), the timer is allocated but not scheduled for execution. It must be started later using install_alarm/1.\n\n ", "prefix":"alarm" }, "hash_stream:alarm_at/4": { "body":"alarm_at(${1:Time}, ${2:Callable}, ${3:Id}, ${4:Options})$5\n$0", "description":"alarm_at(+Time, :Callable, -Id, +Options).\nas alarm/3, but Time is the specification of an absolute point in time. Absolute times are specified in seconds after the Jan 1, 1970 epoch. See also date_time_stamp/2.", "prefix":"alarm_at" }, "hash_stream:atom_to_memory_file/2": { "body":"atom_to_memory_file(${1:Atom}, ${2:Handle})$3\n$0", "description":"atom_to_memory_file(+Atom, -Handle).\nTurn an atom into a read-only memory-file containing the (shared) characters of the atom. Opening this memory-file in mode write yields a permission error.", "prefix":"atom_to_memory_file" }, "hash_stream:call_with_time_limit/2": { "body":"call_with_time_limit(${1:Time}, ${2:Goal})$3\n$0", "description":"call_with_time_limit(+Time, :Goal).\nTrue if Goal completes within Time seconds. Goal is executed as in once/1. If Goal doesn't complete within Time seconds (wall time), exit using the exception time_limit_exceeded. See catch/3. Please note that this predicate uses alarm/4 and therefore its effect on long-running foreign code and system calls is undefined. Blocking I/O can be handled using the timeout option of read_term/3.\n\n", "prefix":"call_with_time_limit" }, "hash_stream:current_alarm/4": { "body":"current_alarm(${1:At}, ${2:Callable}, ${3:Id}, ${4:Status})$5\n$0", "description":"current_alarm(?At, ?:Callable, ?Id, ?Status).\nEnumerate the not-yet-removed alarms. Status is one of done if the alarm has been called, next if it is the next to be fired and scheduled otherwise.", "prefix":"current_alarm" }, "hash_stream:delete_memory_file/3": { "body":"delete_memory_file(${1:Handle}, ${2:Offset}, ${3:Length})$4\n$0", "description":"delete_memory_file(+Handle, +Offset, +Length).\nDelete a Length characters from the memory file, starting at Offset. This predicate raises a domain_error exception if Offset or Offset+Length is out of range and a permission_error if the memory file is read-only or opened.", "prefix":"delete_memory_file" }, "hash_stream:free_memory_file/1": { "body":"free_memory_file(${1:Handle})$2\n$0", "description":"free_memory_file(+Handle).\nDiscard the memory file and its contents. If the file is open it is first closed.", "prefix":"free_memory_file" }, "hash_stream:insert_memory_file/3": { "body":"insert_memory_file(${1:Handle}, ${2:Offset}, ${3:Data})$4\n$0", "description":"insert_memory_file(+Handle, +Offset, +Data).\nInsert Data into the memory file at location Offset. The offset is specified in characters. Data can be an atom, string, code or character list. Other terms are first serialized using writeq/1. 
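As a minimal illustrative sketch (the atom hello is arbitrary): \n\n?- new_memory_file(_H),\n insert_memory_file(_H, 0, hello), % the fresh memory file is not opened, so insertion is permitted\n memory_file_to_atom(_H, A).\nA = hello.\n\n 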
This predicate raises a domain_error exception if Offset is out of range and a permission_error if the memory file is read-only or opened.", "prefix":"insert_memory_file" }, "hash_stream:install_alarm/1": { "body":"install_alarm(${1:Id})$2\n$0", "description":"install_alarm(+Id).\nActivate an alarm allocated using alarm/4 with the option install(false) or stopped using uninstall_alarm/1.", "prefix":"install_alarm" }, "hash_stream:install_alarm/2": { "body":"install_alarm(${1:Id}, ${2:Time})$3\n$0", "description":"install_alarm(+Id, +Time).\nAs install_alarm/1, but specifies a new (relative) timeout value.", "prefix":"install_alarm" }, "hash_stream:md5_hash/3": { "body":"md5_hash(${1:Data}, ${2:Hash}, ${3:Options})$4\n$0", "description":"[det]md5_hash(+Data, -Hash, +Options).\nHash is the MD5 hash of Data, The conversion is controlled by Options: encoding(+Encoding): If Data is a sequence of character codes, this must be translated into a sequence of bytes, because that is what the hashing requires. The default encoding is utf8. The other meaningful value is octet, claiming that Data contains raw bytes.\n\n Data is either an atom, string, code-list or char-list. Hash is an atom holding 32 characters, representing the hash in hexadecimal notation ", "prefix":"md5_hash" }, "hash_stream:memory_file_line_position/4": { "body":"memory_file_line_position(${1:MF}, ${2:Line}, ${3:LinePos}, ${4:Offset})$5\n$0", "description":"memory_file_line_position(+MF, ?Line, ?LinePos, ?Offset).\nTrue if the character offset Offset corresponds with the LinePos character on line Line. Lines are counted from one (1). Note that LinePos is not the column as each character counts for one, including backspace and tab.", "prefix":"memory_file_line_position" }, "hash_stream:memory_file_substring/5": { "body":"memory_file_substring(${1:Handle}, ${2:Before}, ${3:Length}, ${4:After}, ${5:SubString})$6\n$0", "description":"memory_file_substring(+Handle, ?Before, ?Length, ?After, -SubString).\nSubString is a substring of the memory file. There are Before characters in the memory file before SubString, SubString contains Length character and is followed by After characters in the memory file. The signature is the same as sub_string/5 and sub_atom/5, but currently at least two of the 3 position arguments must be specified. Future versions might implement the full functionality of sub_string/5.", "prefix":"memory_file_substring" }, "hash_stream:memory_file_to_atom/2": { "body":"memory_file_to_atom(${1:Handle}, ${2:Atom})$3\n$0", "description":"memory_file_to_atom(+Handle, -Atom).\nReturn the content of the memory-file in Atom.", "prefix":"memory_file_to_atom" }, "hash_stream:memory_file_to_atom/3": { "body":"memory_file_to_atom(${1:Handle}, ${2:Atom}, ${3:Encoding})$4\n$0", "description":"memory_file_to_atom(+Handle, -Atom, +Encoding).\nReturn the content of the memory-file in Atom, pretending the data is in the given Encoding. This can be used to convert from one encoding into another, typically from/to bytes. 
For example, if we must convert a set of bytes that contain text in UTF-8, open the memory file as octet stream, fill it, and get the result using Encoding is utf8.", "prefix":"memory_file_to_atom" }, "hash_stream:memory_file_to_codes/2": { "body":"memory_file_to_codes(${1:Handle}, ${2:Codes})$3\n$0", "description":"memory_file_to_codes(+Handle, -Codes).\nReturn the content of the memory-file as a list of character-codes in Codes.", "prefix":"memory_file_to_codes" }, "hash_stream:memory_file_to_codes/3": { "body":"memory_file_to_codes(${1:Handle}, ${2:Codes}, ${3:Encoding})$4\n$0", "description":"memory_file_to_codes(+Handle, -Codes, +Encoding).\nReturn the content of the memory-file as a list of character-codes in Codes, pretending the data is in the given Encoding.", "prefix":"memory_file_to_codes" }, "hash_stream:memory_file_to_string/2": { "body":"memory_file_to_string(${1:Handle}, ${2:String})$3\n$0", "description":"memory_file_to_string(+Handle, -String).\nReturn the content of the memory-file as a string in -String.", "prefix":"memory_file_to_string" }, "hash_stream:memory_file_to_string/3": { "body":"memory_file_to_string(${1:Handle}, ${2:String}, ${3:Encoding})$4\n$0", "description":"memory_file_to_string(+Handle, -String, +Encoding).\nReturn the content of the memory-file as a string in String, pretending the data is in the given Encoding.", "prefix":"memory_file_to_string" }, "hash_stream:new_memory_file/1": { "body":"new_memory_file(${1:Handle})$2\n$0", "description":"new_memory_file(-Handle).\nCreate a new memory file and return a unique opaque handle to it.", "prefix":"new_memory_file" }, "hash_stream:open_hash_stream/3": { "body":"open_hash_stream(${1:OrgStream}, ${2:HashStream}, ${3:Options})$4\n$0", "description":"[det]open_hash_stream(+OrgStream, -HashStream, +Options).\nOpen a filter stream on OrgStream that maintains a hash. The hash can be retrieved at any time using stream_hash/2. Provided options: algorithm(+Algorithm): One of md5, sha1, sha224, sha256, sha384 or sha512. Default is sha1.\n\nclose_parent(+Bool): If true (default), closing the filter stream also closes the original (parent) stream.\n\n ", "prefix":"open_hash_stream" }, "hash_stream:open_memory_file/3": { "body":"open_memory_file(${1:Handle}, ${2:Mode}, ${3:Stream})$4\n$0", "description":"open_memory_file(+Handle, +Mode, -Stream).\nOpen the memory-file. Mode is one of read, write, append, update or insert. The resulting Stream must be closed using close/1. When opened for update or insert, the current location is initialized at the start of the data and can be modified using seek/2 or set_stream_position/2. In update mode, existing content is replaced, while the size is enlarged after hitting the end of the data. In insert mode, the new data is inserted at the current point.", "prefix":"open_memory_file" }, "hash_stream:open_memory_file/4": { "body":"open_memory_file(${1:Handle}, ${2:Mode}, ${3:Stream}, ${4:Options})$5\n$0", "description":"open_memory_file(+Handle, +Mode, -Stream, +Options).\nOpen a memory-file as open_memory_file/3. Options: encoding(+Encoding): Set the encoding for a memory file and the created stream. Encoding names are the same as used with open/4. By default, memoryfiles represent UTF-8 streams, making them capable of storing arbitrary Unicode text. In practice the only alternative is octet, turning the memoryfile into binary mode. 
Please study SWI-Prolog Unicode and encoding issues before using this option.\n\nfree_on_close(+Bool): If true (default false) and the memory file is opened for reading, discard the file (see free_memory_file/1) if the input is closed. This is used to realise open_chars_stream/2 in library(charsio).\n\n ", "prefix":"open_memory_file" }, "hash_stream:remove_alarm/1": { "body":"remove_alarm(${1:Id})$2\n$0", "description":"remove_alarm(+Id).\nRemove an alarm. If it is not yet fired, it will not be fired any more.", "prefix":"remove_alarm" }, "hash_stream:size_memory_file/2": { "body":"size_memory_file(${1:Handle}, ${2:Size})$3\n$0", "description":"size_memory_file(+Handle, -Size).\nReturn the content-length of the memory-file in characters in the current encoding of the memory file. The file should be closed and contain data.", "prefix":"size_memory_file" }, "hash_stream:size_memory_file/3": { "body":"size_memory_file(${1:Handle}, ${2:Size}, ${3:Encoding})$4\n$0", "description":"size_memory_file(+Handle, -Size, +Encoding).\nReturn the content-length of the memory-file in characters in the given Encoding. The file should be closed and contain data.", "prefix":"size_memory_file" }, "hash_stream:stream_hash/2": { "body":"stream_hash(${1:HashStream}, ${2:Digest})$3\n$0", "description":"[det]stream_hash(+HashStream, -Digest:atom).\nUnify Digest with a hash for the bytes sent to or read from HashStream. Note that the hash is computed on the stream buffers. If the stream is an output stream, it is first flushed and the Digest represents the hash at the current location. If the stream is an input stream the Digest represents the hash of the processed input including the already buffered data.", "prefix":"stream_hash" }, "hash_stream:uninstall_alarm/1": { "body":"uninstall_alarm(${1:Id})$2\n$0", "description":"uninstall_alarm(+Id).\nDeactivate a running alarm, but do not invalidate the alarm identifier. Later, the alarm can be reactivated using either install_alarm/1 or install_alarm/2. Reinstalled using install_alarm/1, it will fire at the originally scheduled time. Reinstalled using install_alarm/2 causes the alarm to fire at the specified time from now.", "prefix":"uninstall_alarm" }, "heaps:add_to_heap/4": { "body": ["add_to_heap(${1:Heap0}, ${2:Priority}, ${3:Key}, ${4:Heap})$5\n$0" ], "description":" add_to_heap(+Heap0, +Priority, ?Key, -Heap) is semidet.\n\n Adds Key with priority Priority to Heap0, constructing a new\n heap in Heap.", "prefix":"add_to_heap" }, "heaps:delete_from_heap/4": { "body": [ "delete_from_heap(${1:Heap0}, ${2:Priority}, ${3:Key}, ${4:Heap})$5\n$0" ], "description":" delete_from_heap(+Heap0, -Priority, +Key, -Heap) is semidet.\n\n Deletes Key from Heap0, leaving its priority in Priority and the\n resulting data structure in Heap. Fails if Key is not found in\n Heap0.\n\n @bug This predicate is extremely inefficient and exists only for\n SICStus compatibility.", "prefix":"delete_from_heap" }, "heaps:empty_heap/1": { "body": ["empty_heap(${1:Heap})$2\n$0" ], "description":" empty_heap(?Heap) is semidet.\n\n True if Heap is an empty heap. Complexity: constant.", "prefix":"empty_heap" }, "heaps:get_from_heap/4": { "body": [ "get_from_heap(${1:Heap0}, ${2:Priority}, ${3:Key}, ${4:Heap})$5\n$0" ], "description":" get_from_heap(?Heap0, ?Priority, ?Key, -Heap) is semidet.\n\n Retrieves the minimum-priority pair Priority-Key from Heap0.\n Heap is Heap0 with that pair removed. 
Complexity: logarithmic\n (amortized), linear in the worst case.", "prefix":"get_from_heap" }, "heaps:heap_size/2": { "body": ["heap_size(${1:Heap}, ${2:Size})$3\n$0" ], "description":" heap_size(+Heap, -Size:int) is det.\n\n Determines the number of elements in Heap. Complexity: constant.", "prefix":"heap_size" }, "heaps:heap_to_list/2": { "body": ["heap_to_list(${1:Heap}, ${2:List})$3\n$0" ], "description":" heap_to_list(+Heap, -List:list) is det.\n\n Constructs a list List of Priority-Element terms, ordered by\n (ascending) priority. Complexity: $O(n \\log n)$.", "prefix":"heap_to_list" }, "heaps:is_heap/1": { "body": ["is_heap(${1:X})$2\n$0" ], "description":" is_heap(+X) is semidet.\n\n Returns true if X is a heap. Validates the consistency of the\n entire heap. Complexity: linear.", "prefix":"is_heap" }, "heaps:list_to_heap/2": { "body": ["list_to_heap(${1:List}, ${2:Heap})$3\n$0" ], "description":" list_to_heap(+List:list, -Heap) is det.\n\n If List is a list of Priority-Element terms, constructs a heap\n out of List. Complexity: linear.", "prefix":"list_to_heap" }, "heaps:merge_heaps/3": { "body": ["merge_heaps(${1:Heap0}, ${2:Heap1}, ${3:Heap})$4\n$0" ], "description":" merge_heaps(+Heap0, +Heap1, -Heap) is det.\n\n Merge the two heaps Heap0 and Heap1 in Heap. Complexity: constant.", "prefix":"merge_heaps" }, "heaps:min_of_heap/3": { "body": ["min_of_heap(${1:Heap}, ${2:Priority}, ${3:Key})$4\n$0" ], "description":" min_of_heap(+Heap, ?Priority, ?Key) is semidet.\n\n Unifies Key with the minimum-priority element of Heap and\n Priority with its priority value. Complexity: constant.", "prefix":"min_of_heap" }, "heaps:min_of_heap/5": { "body": [ "min_of_heap(${1:Heap}, ${2:Priority1}, ${3:Key1}, ${4:Priority2}, ${5:Key2})$6\n$0" ], "description":" min_of_heap(+Heap, ?Priority1, ?Key1, ?Priority2, ?Key2) is semidet.\n\n Gets the two minimum-priority elements from Heap. Complexity: logarithmic\n (amortized).\n\n Do not use this predicate; it exists for compatibility with earlier\n implementations of this library and the SICStus counterpart. It performs\n a linear amount of work in the worst case that a following get_from_heap\n has to re-do.", "prefix":"min_of_heap" }, "heaps:singleton_heap/3": { "body": ["singleton_heap(${1:Heap}, ${2:Priority}, ${3:Key})$4\n$0" ], "description":" singleton_heap(?Heap, ?Priority, ?Key) is semidet.\n\n True if Heap is a heap with the single element Priority-Key.\n\n Complexity: constant.", "prefix":"singleton_heap" }, "help/0": { "body":"help$1\n$0", "description":"help.\nEquivalent to help(help/1).", "prefix":"help" }, "help/1": { "body":"help(${1:What})$2\n$0", "description":"help(+What).\nShow specified part of the manual. What is one of: \n\nName/Arity: Give help on specified predicate\n\nName: Give help on named predicate with any arity or C interface function with that name\n\n
Section: Display specified section. Section numbers are dash-separated numbers: 2-3 refers to section 2.3 of the manual. Section numbers are obtained using apropos/1. Examples:\n\n?- help(assert). Give help on predicate assert\n?- help(3-4). Display section 3.4 of the manual\n?- help('PL_retry'). Give help on interface function PL_retry()\n\nSee also apropos/1 and the SWI-Prolog home page at http://www.swi-prolog.org, which provides a FAQ, an HTML version of the manual for online browsing, and HTML and PDF versions for downloading.\n\n", "prefix":"help" }, "help_index:function/3": { "body": ["function(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"function('Param1','Param2','Param3')", "prefix":"function" }, "help_index:predicate/5": { "body": [ "predicate(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"predicate('Param1','Param2','Param3','Param4','Param5')", "prefix":"predicate" }, "help_index:section/4": { "body": [ "section(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"section('Param1','Param2','Param3','Param4')", "prefix":"section" }, "http/authenticate:http_authenticate/3": { "body": ["http_authenticate(${1:Type}, ${2:Request}, ${3:Fields})$4\n$0" ], "description":" http_authenticate(+Type, +Request, -Fields)\n\n True if Request contains the information to continue according\n to Type. Type identifies the required authentication technique:\n\n * basic(+PasswordFile)\n Use HTTP =Basic= authentication and verify the password\n from PasswordFile. PasswordFile is a file holding\n usernames and passwords in a format compatible to\n Unix and Apache. Each line is a record with =|:|=\n separated fields. The first field is the username and\n the second the password _hash_. Password hashes are\n validated using crypt/2.\n\n Successful authorization is cached for 60 seconds to avoid\n overhead of decoding and lookup of the user and password data.\n\n http_authenticate/3 just validates the header. If authorization\n is not provided the browser must be challenged, in response to\n which it normally opens a user-password dialogue. Example code\n realising this is below. The exception causes the HTTP wrapper\n code to generate an HTTP 401 reply.\n\n ==\n ( http_authenticate(basic(passwd), Request, Fields)\n -> true\n ; throw(http_reply(authorise(basic, Realm)))\n ).\n ==\n\n @param Fields is a list of fields from the password-file entry.\n The first element is the user. The hash is skipped.\n @tbd Should we also cache failures to reduce the risk of\n DoS attacks?", "prefix":"http_authenticate" }, "http/authenticate:http_authorization_data/2": { "body": ["http_authorization_data(${1:AuthorizeText}, ${2:Data})$3\n$0" ], "description":" http_authorization_data(+AuthorizeText, ?Data) is semidet.\n\n Decode the HTTP =Authorization= header. 
Data is a term\n\n Method(User, Password)\n\n where Method is the (downcased) authorization method (typically\n =basic=), User is an atom holding the user name and Password is\n a list of codes holding the password", "prefix":"http_authorization_data" }, "http/authenticate:http_current_user/3": { "body": ["http_current_user(${1:File}, ${2:User}, ${3:Fields})$4\n$0" ], "description":" http_current_user(+File, ?User, ?Fields) is nondet.\n\n True when User is present in the htpasswd file File and Fields\n provides the additional fields.\n\n @arg Fields are the fields from the password file File,\n converted using name/2, which means that numeric values\n are passed as numbers and other fields as atoms. The\n password hash is the first element of Fields and is\n a string.", "prefix":"http_current_user" }, "http/authenticate:http_read_passwd_file/2": { "body": ["http_read_passwd_file(${1:Path}, ${2:Data})$3\n$0" ], "description":" http_read_passwd_file(+Path, -Data) is det.\n\n Read a password file. Data is a list of terms of the format\n below, where User is an atom identifying the user, Hash is a\n string containing the salted password hash and Fields contain\n additional fields. The string value of each field is converted\n using name/2 to either a number or an atom.\n\n ==\n passwd(User, Hash, Fields)\n ==", "prefix":"http_read_passwd_file" }, "http/authenticate:http_write_passwd_file/2": { "body": ["http_write_passwd_file(${1:File}, ${2:Data})$3\n$0" ], "description":" http_write_passwd_file(+File, +Data:list) is det.\n\n Write password data Data to File. Data is a list of entries as\n below. See http_read_passwd_file/2 for details.\n\n ==\n passwd(User, Hash, Fields)\n ==\n\n @tbd Write to a new file and atomically replace the old one.", "prefix":"http_write_passwd_file" }, "http/html_head:html_current_resource/1": { "body":"html_current_resource(${1:About})$2\n$0", "description":"[nondet]html_current_resource(?About).\nTrue when About is a currently known resource.", "prefix":"html_current_resource" }, "http/html_head:html_requires/1": { "body":"html_requires(${1:ResourceOrList})$2\n$0", "description":"[det]html_requires(+ResourceOrList)//.\nInclude ResourceOrList and all dependencies derived from it and add them to the HTML head using html_post/2. The actual dependencies are computed during the HTML output phase by html_insert_resource/3.", "prefix":"html_requires" }, "http/html_head:html_requires/3": { "body": ["html_requires(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html_requires('Param1','Param2','Param3')", "prefix":"html_requires" }, "http/html_head:html_resource/2": { "body":"html_resource(${1:About}, ${2:Properties})$3\n$0", "description":"[det]html_resource(+About, +Properties).\nRegister an HTML head resource. About is either an atom that specifies an HTTP location or a term Alias(Sub). This works similar to absolute_file_name/2. See http:location_path/2 for details. Recognised properties are: requires(+Requirements): Other required script and css files. If this is a plain file name, it is interpreted relative to the declared resource. Requirements can be a list, which is equivalent to multiple requires properties.\n\nvirtual(+Bool): If true (default false), do not include About itself, but only its dependencies. 
This allows for defining an alias for one or more resources.\n\nordered(+Bool): Defines that the list of requirements is ordered, which means that each requirement in the list depends on its predecessor.\n\naggregate(+List): States that About is an aggregate of the resources in List. This means that if both About and one of the elements of List appears in the dependencies, About is kept and the smaller one is dropped. If there are a number of dependencies on the small members, these are replaced with dependency on the big (aggregate) one, for example, to specify that a big javascript is actually the composition of a number of smaller ones.\n\nmime_type(-Mime): May be specified for non-virtual resources to specify the mime-type of the resource. By default, the mime type is derived from the file name using file_mime_type/2.\n\n Registering the same About multiple times extends the properties defined for About. In particular, this allows for adding additional dependencies to a (virtual) resource.\n\n", "prefix":"html_resource" }, "http/html_head:js_arg/1": { "body":"js_arg(${1:Expression})$2\n$0", "description":"[semidet]js_arg(+Expression)//.\nSame as js_expression/3, but fails if Expression is invalid, where js_expression/3 raises an error. deprecated: New code should use js_expression/3.\n\n ", "prefix":"js_arg" }, "http/html_head:mime_include/2": { "body":"mime_include(${1:Mime}, ${2:Path})$3\n$0", "description":"[semidet,multifile]mime_include(+Mime, +Path)//.\nHook called to include a link to an HTML resource of type Mime into the HTML head. The Mime type is computed from Path using file_mime_type/2. If the hook fails, two built-in rules for text/css and text/javascript are tried. For example, to include a =.pl= files as a Prolog script, use: \n\n:- multifile\n html_head:mime_include//2.\n\nhtml_head:mime_include(text/'x-prolog', Path) --> !,\n html(script([ type('text/x-prolog'),\n src(Path)\n ], [])).\n\n \n\n", "prefix":"mime_include" }, "http/html_quasi_quotations:html/4": { "body": ["html(${1:Content}, ${2:Vars}, ${3:VarDict}, ${4:DOM})$5\n$0" ], "description":" html(+Content, +Vars, +VarDict, -DOM) is det.\n\n The predicate html/4 implements HTML quasi quotations. These\n quotations produce a DOM term that is suitable for html//1 and\n other predicates that are declared to consume this format. The\n quasi quoter only accepts valid, but possibly partial HTML\n documents. The document *must* begin with a tag. The quoter\n replaces attributes or content whose value is a Prolog variable\n that appears in the argument list of the =html= indicator. If\n the variable defines content, it must be the only content. Here\n is an example, replacing both a content element and an\n attribute. Note that the document is valid HTML.\n\n ==\n html({|html(Name, URL)||\n

<p>Dear <span>Name<\/span>,\n\n

You can download<\/a> the requested\n article now.\n |}\n ==", "prefix":"html" }, "http/html_write:html/1": { "body":"html(${1:Spec})$2\n$0", "description":"html(:Spec)//.\nThe DCG non-terminal html/3 is the main predicate of this library. It translates the specification for an HTML page into a list of atoms that can be written to a stream using print_html/[1,2]. The expansion rules of this predicate may be extended by defining the multifile DCG html_write:expand//1. Spec is either a single specification or a list of single specifications. Using nested lists is not allowed to avoid ambiguity caused by the atom [] \n\nAtomic data Atomic data is quoted using html_quoted/3. \nFmt - Args Fmt and Args are used as format-specification and argument list to format/3. The result is quoted and added to the output list. \n\\List Escape sequence to add atoms directly to the output list. This can be used to embed external HTML code or emit script output. List is a list of the following terms: Fmt - Args Fmt and Args are used as format-specification and argument list to format/3. The result is added to the output list.Atomic Atomic values are added directly to the output list. \n\\Term Invoke the non-terminal Term in the calling module. This is the common mechanism to realise abstraction and modularisation in generating HTML. \nModule:Term Invoke the non-terminal :. This is similar to \\Term but allows for invoking grammar rules in external packages. \n&(Entity) Emit &; or &#; if Entity is an integer. SWI-Prolog atoms and strings are represented as Unicode. Explicit use of this construct is rarely needed because code-points that are not supported by the output encoding are automatically converted into character-entities. \nTag(Content) Emit HTML element Tag using Content and no attributes. Content is handed to html/3. See section 3.19.4 for details on the automatically generated layout. \nTag(Attributes, Content) Emit HTML element Tag using Attributes and Content. Attributes is either a single attribute of a list of attributes. Each attributes is of the format Name(Value) or Name=Value. Value is the atomic attribute value but allows for a limited functional notation: A + B Concatenation of A and BFormat-Arguments Use format/3 and emit the result as quoted value.encode(Atom) Use uri_encoded/3 to create a valid URL query component.location_by_id(ID) HTTP location of the HTTP handler with given ID. See http_location_by_id/2.A + List List is handled as a URL `search' component. The list members are terms of the format Name = Value or Name(Value). Values are encoded as in the encode option described above.List Emit SGML multi-valued attributes (e.g., NAMES). Each value in list is separated by a space. This is particularly useful for setting multiple class attributes on an element. For example: ... span(class([c1,c2]), ...), The example below generates a URL that references the predicate set_lang/1 in the application with given parameters. The http_handler/3 declaration binds /setlang to the predicate set_lang/1 for which we provide a very simple implementation. The code between ... is part of an HTML page showing the english flag which, when pressed, calls set_lang(Request) where Request contains the search parameter lang = en. Note that the HTTP location (path) /setlang can be moved without affecting this code. :- http_handler('/setlang', set_lang, []). 
set_lang(Request) :- http_parameters(Request, [ lang(Lang, []) ]), http_session_retractall(lang(_)), http_session_assert(lang(Lang)), reply_html_page(title('Switched language'), p(['Switch language to ', Lang])). ... html(a(href(location_by_id(set_lang) + [lang(en)]), img(src('/www/images/flags/en.png')))), ... \n\n", "prefix":"html" }, "http/html_write:html/3": { "body": ["html(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html('Param1','Param2','Param3')", "prefix":"html" }, "http/html_write:html/4": { "body": [ "html(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"html('Param1','Param2','Param3','Param4')", "prefix":"html" }, "http/html_write:html_begin/1": { "body":"html_begin(${1:Begin})$2\n$0", "description":"html_begin(+Begin)//.\nJust open the given element. Begin is either an atom or a compound term, In the latter case the arguments are used as arguments to the begin-tag. Some examples: \n\n html_begin(table)\n html_begin(table(border(2), align(center)))\n\n This predicate provides an alternative to using the \\Command syntax in the html/3 specification. The following two fragments are the same. The preferred solution depends on your preferences as well as whether the specification is generated or entered by the programmer. \n\n\n\ntable(Rows) -->\n html(table([border(1), align(center), width('80%')],\n [ \\table_header,\n \\table_rows(Rows)\n ])).\n\n% or\n\ntable(Rows) -->\n html_begin(table(border(1), align(center), width('80%'))),\n table_header,\n table_rows,\n html_end(table).\n\n ", "prefix":"html_begin" }, "http/html_write:html_begin/3": { "body": ["html_begin(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html_begin('Param1','Param2','Param3')", "prefix":"html_begin" }, "http/html_write:html_current_option/1": { "body": ["html_current_option(${1:Option})$2\n$0" ], "description":" html_current_option(?Option) is nondet.\n\n True if Option is an active option for the HTML generator.", "prefix":"html_current_option" }, "http/html_write:html_end/1": { "body":"html_end(${1:End})$2\n$0", "description":"html_end(+End)//.\nEnd an element. See html_begin/1 for details.", "prefix":"html_end" }, "http/html_write:html_end/3": { "body": ["html_end(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html_end('Param1','Param2','Param3')", "prefix":"html_end" }, "http/html_write:html_meta/1": { "body": ["html_meta(${1:Heads})$2\n$0" ], "description":" html_meta(+Heads) is det.\n\n This directive can be used to declare that an HTML rendering\n rule takes HTML content as argument. It has two effects. It\n emits the appropriate meta_predicate/1 and instructs the\n built-in editor (PceEmacs) to provide proper colouring for the\n arguments. The arguments in Head are the same as for\n meta_predicate or can be constant =html=. For example:\n\n ==\n :- html_meta\n page(html,html,?,?).\n ==", "prefix":"html_meta" }, "http/html_write:html_post/2": { "body":"html_post(${1:Id}, ${2:HTML})$3\n$0", "description":"[det]html_post(+Id, :HTML)//.\nReposition HTML to the receiving Id. The html_post/4 call processes HTML using html/3. Embedded \\-commands are executed by mailman/1 from print_html/1 or html_print_length/2. These commands are called in the calling context of the html_post/4 call. A typical usage scenario is to get required CSS links in the document head in a reusable fashion. 
First, we define css/3 as: \n\n\n\ncss(URL) -->\n html_post(css,\n link([ type('text/css'),\n rel('stylesheet'),\n href(URL)\n ])).\n\n Next we insert the unique CSS links, in the pagehead using the following call to reply_html_page/2: \n\n\n\n reply_html_page([ title(...),\n \\html_receive(css)\n ],\n ...)\n\n ", "prefix":"html_post" }, "http/html_write:html_post/4": { "body": [ "html_post(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"html_post('Param1','Param2','Param3','Param4')", "prefix":"html_post" }, "http/html_write:html_print_length/2": { "body":"html_print_length(${1:List}, ${2:Length})$3\n$0", "description":"html_print_length(+List, -Length).\nWhen calling html_print/[1,2] on List, Length characters will be produced. Knowing the length is needed to provide the Content-length field of an HTTP reply-header.", "prefix":"html_print_length" }, "http/html_write:html_quoted/1": { "body":"html_quoted(${1:Atom})$2\n$0", "description":"html_quoted(+Atom)//.\nEmit the text in Atom, inserting entity-references for the SGML special characters <&>.", "prefix":"html_quoted" }, "http/html_write:html_quoted/3": { "body": ["html_quoted(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html_quoted('Param1','Param2','Param3')", "prefix":"html_quoted" }, "http/html_write:html_quoted_attribute/1": { "body":"html_quoted_attribute(${1:Atom})$2\n$0", "description":"html_quoted_attribute(+Atom)//.\nEmit the text in Atom suitable for use as an SGML attribute, inserting entity-references for the SGML special characters <&>\".", "prefix":"html_quoted_attribute" }, "http/html_write:html_quoted_attribute/3": { "body": [ "html_quoted_attribute(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html_quoted_attribute('Param1','Param2','Param3')", "prefix":"html_quoted_attribute" }, "http/html_write:html_receive/1": { "body":"html_receive(${1:Id})$2\n$0", "description":"[det]html_receive(+Id)//.\nReceive posted HTML tokens. Unique sequences of tokens posted with html_post/4 are inserted at the location where html_receive/3 appears. See also: - The local predicate sorted_html/3 handles the output of html_receive/3. - html_receive/4 allows for post-processing the posted material.\n\n ", "prefix":"html_receive" }, "http/html_write:html_receive/2": { "body":"html_receive(${1:Id}, ${2:Handler})$3\n$0", "description":"[det]html_receive(+Id, :Handler)//.\nThis extended version of html_receive/3 causes Handler to be called to process all messages posted to the channal at the time output is generated. Handler is a grammar rule that is called with three extra arguments. \n\nA list of Module:Term, of posted terms. Module is the contest module of html_post and Term is the unmodified term. Members are in the order posted and may contain duplicates.\nDCG input list. 
The final output must be produced by a call to html/3.\nDCG output list.\n\n Typically, Handler collects the posted terms, creating a term suitable for html/3 and finally calls html/3.\n\n", "prefix":"html_receive" }, "http/html_write:html_receive/3": { "body": ["html_receive(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"html_receive('Param1','Param2','Param3')", "prefix":"html_receive" }, "http/html_write:html_receive/4": { "body": [ "html_receive(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"html_receive('Param1','Param2','Param3','Param4')", "prefix":"html_receive" }, "http/html_write:html_root_attribute/4": { "body": [ "html_root_attribute(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"html_root_attribute('Param1','Param2','Param3','Param4')", "prefix":"html_root_attribute" }, "http/html_write:html_set_options/1": { "body": ["html_set_options(${1:Options})$2\n$0" ], "description":" html_set_options(+Options) is det.\n\n Set options for the HTML output. Options are stored in prolog\n flags to ensure proper multi-threaded behaviour where setting an\n option is local to the thread and new threads start with the\n options from the parent thread. Defined options are:\n\n * dialect(Dialect)\n One of =html4=, =xhtml= or =html5= (default). For\n compatibility reasons, =html= is accepted as an\n alias for =html4=.\n\n * doctype(+DocType)\n Set the =|<|DOCTYPE|= DocType =|>|= line for page//1 and\n page//2.\n\n * content_type(+ContentType)\n Set the =|Content-type|= for reply_html_page/3\n\n Note that the doctype and content_type flags are covered by\n distinct prolog flags: =html4_doctype=, =xhtml_doctype= and\n =html5_doctype= and similar for the content type. The Dialect\n must be switched before doctype and content type.", "prefix":"html_set_options" }, "http/html_write:page/1": { "body":"page(${1:Contents})$2\n$0", "description":"page(:Contents)//.\nThis version of the page/[1,2] only gives you the SGML DOCTYPE and the HTML element. Contents is used to generate both the head and body of the page.", "prefix":"page" }, "http/html_write:page/2": { "body":"page(${1:HeadContent}, ${2:BodyContent})$3\n$0", "description":"page(:HeadContent, :BodyContent)//.\nThe DCG non-terminal page/4 generated a complete page, including the SGML DOCTYPE declaration. HeadContent are elements to be placed in the head element and BodyContent are elements to be placed in the body element. To achieve common style (background, page header and footer), it is possible to define DCG non-terminals head/3 and/or body/3. Non-terminal page/3 checks for the definition of these non-terminals in the module it is called from as well as in the user module. If no definition is found, it creates a head with only the HeadContent (note that the title is obligatory) and a body with bgcolor set to white and the provided BodyContent. 
\n\nNote that further customisation is easily achieved using html/3 directly as page/4 is (besides handling the hooks) defined as: \n\n\n\npage(Head, Body) -->\n html([ \\['\\n'],\n html([ head(Head),\n body(bgcolor(white), Body)\n ])\n ]).\n\n ", "prefix":"page" }, "http/html_write:page/3": { "body": ["page(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"page('Param1','Param2','Param3')", "prefix":"page" }, "http/html_write:page/4": { "body": [ "page(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"page('Param1','Param2','Param3','Param4')", "prefix":"page" }, "http/html_write:page/5": { "body": [ "page(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'}, ${5:'Param5'})$6\n$0" ], "description":"page('Param1','Param2','Param3','Param4','Param5')", "prefix":"page" }, "http/html_write:print_html/1": { "body":"print_html(${1:List})$2\n$0", "description":"print_html(+List).\nPrint the token list to the Prolog current output stream.", "prefix":"print_html" }, "http/html_write:print_html/2": { "body":"print_html(${1:Stream}, ${2:List})$3\n$0", "description":"print_html(+Stream, +List).\nPrint the token list to the specified output stream", "prefix":"print_html" }, "http/html_write:reply_html_page/2": { "body":"reply_html_page(${1:Head}, ${2:Body})$3\n$0", "description":"reply_html_page(:Head, :Body).\nSame as reply_html_page(default, Head, Body).", "prefix":"reply_html_page" }, "http/html_write:reply_html_page/3": { "body":"reply_html_page(${1:Style}, ${2:Head}, ${3:Body})$4\n$0", "description":"reply_html_page(+Style, :Head, :Body).\nWrites an HTML page preceded by an HTTP header as required by library(http_wrapper) (CGI-style). Here is a simple typical example: \n\nreply(Request) :-\n reply_html_page(title('Welcome'),\n [ h1('Welcome'),\n p('Welcome to our ...')\n ]).\n\n The header and footer of the page can be hooked using the grammar-rules user:head//2 and user:body//2. The first argument passed to these hooks is the Style argument of reply_html_page/3 and the second is the 2nd (for head/4) or 3rd (for body/4) argument of reply_html_page/3. These hooks can be used to restyle the page, typically by embedding the real body content in a div. E.g., the following code provides a menu on top of each page of that is identified using the style myapp. \n\n\n\n:- multifile\n user:body//2.\n\nuser:body(myapp, Body) -->\n html(body([ div(id(top), \\application_menu),\n div(id(content), Body)\n ])).\n\n Redefining the head can be used to pull in scripts, but typically html_requires/3 provides a more modular approach for pulling scripts and CSS-files.\n\n", "prefix":"reply_html_page" }, "http/html_write:xhtml_ns/2": { "body":"xhtml_ns(${1:Id}, ${2:Value})$3\n$0", "description":"xhtml_ns(Id, Value)//.\nDemand an xmlns:id=Value in the outer html tag. This uses the html_post/2 mechanism to post to the xmlns channel. Rdfa (http://www.w3.org/2006/07/SWD/RDFa/syntax/), embedding RDF in (x)html provides a typical usage scenario where we want to publish the required namespaces in the header. We can define: \n\nrdf_ns(Id) -->\n { rdf_global_id(Id:'', Value) },\n xhtml_ns(Id, Value).\n\n After which we can use rdf_ns/3 as a normal rule in html/3 to publish namespaces from library(semweb/rdf_db). Note that this macro only has effect if the dialect is set to xhtml. In html mode it is silently ignored. 
\n\nThe required xmlns receiver is installed by html_begin/3 using the html tag and thus is present in any document that opens the outer html environment through this library.\n\n", "prefix":"xhtml_ns" }, "http/html_write:xhtml_ns/4": { "body": [ "xhtml_ns(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"xhtml_ns('Param1','Param2','Param3','Param4')", "prefix":"xhtml_ns" }, "http/http_authenticate:cors_enable/2": { "body":"cors_enable(${1:Request}, ${2:Options})$3\n$0", "description":"[det]cors_enable(+Request, +Options).\nCORS reply to a Preflight OPTIONS request. Request is the HTTP request. Options provides: methods(+List): List of supported HTTP methods. The default is GET, only allowing for read requests.\n\nheaders(+List): List of headers the client asks for and we allow. The default is to simply echo what has been requested for.\n\n Both methods and headers may use Prolog friendly syntax, e.g., get for a method and content_type for a header. \n\nSee also: http://www.html5rocks.com/en/tutorials/cors/\n\n ", "prefix":"cors_enable" }, "http/http_authenticate:http_authenticate/3": { "body":"http_authenticate(${1:Type}, ${2:Request}, ${3:Fields})$4\n$0", "description":"http_authenticate(+Type, +Request, -Fields).\nTrue if Request contains the information to continue according to Type. Type identifies the required authentication technique: basic(+PasswordFile): Use HTTP Basic authetication and verify the password from PasswordFile. PasswordFile is a file holding usernames and passwords in a format compatible to Unix and Apache. Each line is record with : separated fields. The first field is the username and the second the password hash. Password hashes are validated using crypt/2.\n\n Successful authorization is cached for 60 seconds to avoid overhead of decoding and lookup of the user and password data. \n\nhttp_authenticate/3 just validates the header. If authorization is not provided the browser must be challenged, in response to which it normally opens a user-password dialogue. Example code realising this is below. The exception causes the HTTP wrapper code to generate an HTTP 401 reply. \n\n\n\n( http_authenticate(basic(passwd), Request, Fields)\n-> true\n; throw(http_reply(authorise(basic, Realm)))\n).\n\n Fields is a list of fields from the password-file entry. The first element is the user. The hash is skipped. To be done: Should we also cache failures to reduce the risc of DoS attacks?\n\n ", "prefix":"http_authenticate" }, "http/http_authenticate:http_authorization_data/2": { "body":"http_authorization_data(${1:AuthorizeText}, ${2:Data})$3\n$0", "description":"[semidet]http_authorization_data(+AuthorizeText, ?Data).\nDecode the HTTP Authorization header. Data is a term \n\nMethod(User, Password)\n\n where Method is the (downcased) authorization method (typically basic), User is an atom holding the user name and Password is a list of codes holding the password\n\n", "prefix":"http_authorization_data" }, "http/http_authenticate:http_current_user/3": { "body":"http_current_user(${1:File}, ${2:User}, ${3:Fields})$4\n$0", "description":"[nondet]http_current_user(+File, ?User, ?Fields).\nTrue when User is present in the htpasswd file File and Fields provides the additional fields. Fields are the fields from the password file File, converted using name/2, which means that numeric values are passed as numbers and other fields as atoms. The password hash is the first element of Fields and is a string. 
", "prefix":"http_current_user" }, "http/http_authenticate:http_read_passwd_file/2": { "body":"http_read_passwd_file(${1:Path}, ${2:Data})$3\n$0", "description":"[det]http_read_passwd_file(+Path, -Data).\nRead a password file. Data is a list of terms of the format below, where User is an atom identifying the user, Hash is a string containing the salted password hash and Fields contain additional fields. The string value of each field is converted using name/2 to either a number or an atom. \n\npasswd(User, Hash, Fields)\n\n ", "prefix":"http_read_passwd_file" }, "http/http_authenticate:http_write_passwd_file/2": { "body":"http_write_passwd_file(${1:File}, ${2:Data})$3\n$0", "description":"[det]http_write_passwd_file(+File, +Data:list).\nWrite password data Data to File. Data is a list of entries as below. See http_read_passwd_file/2 for details. \n\npasswd(User, Hash, Fields)\n\n To be done: Write to a new file and atomically replace the old one.\n\n ", "prefix":"http_write_passwd_file" }, "http/http_ax:ax_form_attributes/2": { "body": ["ax_form_attributes(${1:Form}, ${2:Values})$3\n$0" ], "description":" ax_form_attributes(+Form, -Values) is det.\n\n True if Values is a list Alias(Value) for each exchanged\n attribute.\n\n Note that we assume we get the same alias names as we used for\n requesting the data. Not sure whether this is true.\n\n @arg Form is an HTTP form as returned using the form(Form)\n option of http_parameters/3.", "prefix":"ax_form_attributes" }, "http/http_ax:http_ax_attributes/2": { "body": ["http_ax_attributes(${1:Spec}, ${2:HTTPAttributes})$3\n$0" ], "description":" http_ax_attributes(+Spec, -HTTPAttributes) is det.\n\n True when HTTPAttributes is a list of Name=Value pairs that can\n be used with an HTTP request to query for the attributes Spec.\n Spec is a list of elements =|Alias(Value[, Options])|=. Options\n include:\n\n - required\n The attribute is required. This is mutually exclusive\n with =if_available=.\n - if_available\n Only provide the attribute if it is available. This is\n mutually exclusive with =required=. This is the default.\n - url(+URL)\n Can be used to ovcerrule or extend the ax_alias/2.\n - count(+Count)\n Maximum number of values to provide\n\n For example:\n\n ==\n ?- http_ax_attributes([ nickname(Nick),\n email(Email, [required])\n ], Params).\n Params = [ 'openid.ax.mode' = fetch_request,\n 'openid.ax.type.nickname' = 'http://axschema.org/namePerson/friendly',\n 'openid.ax.type.email' = 'http://axschema.org/contact/email',\n 'openid.ax.required' = email,\n 'openid.ax.if_available' = nickname\n ].\n ==", "prefix":"http_ax_attributes" }, "http/http_chunked:http_chunked_open/3": { "body":"http_chunked_open(${1:RawStream}, ${2:DataStream}, ${3:Options})$4\n$0", "description":"http_chunked_open(+RawStream, -DataStream, +Options).\nCreate a stream to realise HTTP chunked encoding or decoding. The technique is similar to library(zlib), using a Prolog stream as a filter on another stream. See online documentation at http://www.swi-prolog.org/ for details.", "prefix":"http_chunked_open" }, "http/http_chunked:reply_pwp_page/3": { "body":"reply_pwp_page(${1:File}, ${2:Options}, ${3:Request})$4\n$0", "description":"reply_pwp_page(:File, +Options, +Request).\nReply a PWP file. This interface is provided to server individual locations from PWP files. 
Using a PWP file rather than generating the page from Prolog may be desirable because the page contains a lot of text (which is cumbersome to generate from Prolog) or because the maintainer is not familiar with Prolog. Options supported are: \n\nmime_type(+Type): Serve the file using the given mime-type. Default is text/html.\n\nunsafe(+Boolean): Passed to http_safe_file/2 to check for unsafe paths.\n\npwp_module(+Boolean): If true, (default false), process the PWP file in a module constructed from its canonical absolute path. Otherwise, the PWP file is processed in the calling module.\n\n Initial context: \n\nSCRIPT_NAME: Virtual path of the script.\n\nSCRIPT_DIRECTORY: Physical directory where the script lives\n\nQUERY: Var=Value list representing the query-parameters\n\nREMOTE_USER: If access has been authenticated, this is the authenticated user.\n\nREQUEST_METHOD: One of get, post, put or head\n\nCONTENT_TYPE: Content-type provided with HTTP POST and PUT requests\n\nCONTENT_LENGTH: Content-length provided with HTTP POST and PUT requests\n\n While processing the script, the file-search-path pwp includes the current location of the script. I.e., the following will find myprolog in the same directory as where the PWP file resides. \n\n\n\npwp:ask=\"ensure_loaded(pwp(myprolog))\"\n\n See also: pwp_handler/2.\n\nTo be done: complete the initial context, as far as possible from CGI variables. See http://hoohoo.ncsa.illinois.edu/docs/cgi/env.html\n\n ", "prefix":"reply_pwp_page" }, "http/http_client:http_convert_data/4": { "body":"http_convert_data(${1:In}, ${2:Fields}, ${3:Data}, ${4:Options})$5\n$0", "description":"[semidet,multifile]http_convert_data(+In, +Fields, -Data, +Options).\nMulti-file hook to convert a HTTP payload according to the Content-Type header. The default implementation deals with application/x-prolog. The HTTP framework provides implementations for JSON (library(http/http_json)), HTML/XML (library(http/http_sgml_plugin))", "prefix":"http_convert_data" }, "http/http_client:http_delete/3": { "body":"http_delete(${1:URL}, ${2:Data}, ${3:Options})$4\n$0", "description":"[det]http_delete(+URL, -Data, +Options).\nExecute a DELETE method on the server. Arguments are the same as for http_get/3. Typically one should pass the option status_code(-Code) to assess and evaluate the returned status code. Without, codes other than 200 are interpreted as an error. See also: Implemented on top of http_get/3.\n\nTo be done: Properly map the 201, 202 and 204 replies.\n\n ", "prefix":"http_delete" }, "http/http_client:http_disconnect/1": { "body":"http_disconnect(${1:Connections})$2\n$0", "description":"[det]http_disconnect(+Connections).\nClose down some connections. Currently Connections must have the value all, closing all connections. deprecated: New code should use http_close_keep_alive/1 from library(http/http_open).\n\n ", "prefix":"http_disconnect" }, "http/http_client:http_get/3": { "body":"http_get(${1:URL}, ${2:Data}, ${3:Options})$4\n$0", "description":"[det]http_get(+URL, -Data, +Options).\nGet data from a URL server and convert it to a suitable Prolog representation based on the Content-Type header and plugins. This predicate is the common implementation of the HTTP client operations. The predicates http_delete/3, http_post/4 and http_put/4 call this predicate with an appropriate method(+Method) option and ---for http_post/4 and http_put/4--- a post(+Data) option. Options are passed to http_open/3 and http_read_data/3. 
Other options: \n\nreply_header(-Fields): Synonym for headers(Fields) from http_open/3. Provided for backward compatibility. Note that http_version(Major-Minor) is missing in the new version.\n\n ", "prefix":"http_get" }, "http/http_client:http_patch/4": { "body":"http_patch(${1:URL}, ${2:Data}, ${3:Reply}, ${4:Options})$5\n$0", "description":"http_patch(+URL, +Data, -Reply, +Options).\nIssue an HTTP PATCH request. Arguments are the same as for http_post/4. See also: Implemented on top of http_post/4.\n\n ", "prefix":"http_patch" }, "http/http_client:http_post/4": { "body":"http_post(${1:URL}, ${2:Data}, ${3:Reply}, ${4:Options})$5\n$0", "description":"[det]http_post(+URL, +Data, -Reply, +Options).\nIssue an HTTP POST request. Data is posted using http_post_data/3. The HTTP server reply is returned in Reply, using the same rules as for http_get/3. See also: Implemented on top of http_get/3.\n\n ", "prefix":"http_post" }, "http/http_client:http_put/4": { "body":"http_put(${1:URL}, ${2:Data}, ${3:Reply}, ${4:Options})$5\n$0", "description":"http_put(+URL, +Data, -Reply, +Options).\nIssue an HTTP PUT request. Arguments are the same as for http_post/4. See also: Implemented on top of http_post/4.\n\n ", "prefix":"http_put" }, "http/http_client:http_read_data/3": { "body":"http_read_data(${1:Request}, ${2:Data}, ${3:Options})$4\n$0", "description":"[det]http_read_data(+Request, -Data, +Options).\nRead data from an HTTP connection and convert it according to the supplied to(Format) option or based on the Content-type in the Request. The following options are supported: to(Format): Convert data into Format. Values are: stream(+WriteStream)) Append the content of the message to Stream\natom Return the reply as an atom\nstring Return the reply as a string\ncodes Return the reply as a list of codes\n\n\n\nform_data(AsForm): input_encoding(+Encoding): on_filename(:CallBack): These options are implemented by the plugin library(http/http_multipart_plugin) and apply to processing multipart/form-data content.\n\ncontent_type(+Type): Overrule the content-type that is part of Request as a work-around for wrongly configured servers.\n\n Without plugins, this predicate handles \n\napplication/x-www-form-urlencoded: Converts form-data into a list of Name=Value terms.\n\napplication/x-prolog: Converts data into a Prolog term.\n\n Request is a parsed HTTP request as returned by http_read_request/2 or available from the HTTP server's request dispatcher. Request must contain a term input(In) that provides the input stream from the HTTP server. ", "prefix":"http_read_data" }, "http/http_cookie:cookie_current_cookie/4": { "body": [ "cookie_current_cookie(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"cookie_current_cookie('Param1','Param2','Param3','Param4')", "prefix":"cookie_current_cookie" }, "http/http_cookie:cookie_remove_all_clients/0": { "body": ["cookie_remove_all_clients$1\n$0" ], "description":" cookie_remove_all_clients is det.\n\n Simply logout all clients. See http_remove_client/1.", "prefix":"cookie_remove_all_clients" }, "http/http_cookie:cookie_remove_client/1": { "body": ["cookie_remove_client(${1:ClientId})$2\n$0" ], "description":" cookie_remove_client(+ClientId) is det.\n\n Fake user quitting a browser. 
Removes all cookies that do\n not have an expire date.", "prefix":"cookie_remove_client" }, "http/http_cors:cors_enable/0": { "body": ["cors_enable$1\n$0" ], "description":" cors_enable is det.\n\n Emit the HTTP header =|Access-Control-Allow-Origin|= using\n domains from the setting http:cors. If this setting is []\n (default), nothing is written. This predicate is typically used\n for replying to API HTTP requests (e.g., replies to an AJAX\n request that typically serve JSON or XML).", "prefix":"cors_enable" }, "http/http_cors:cors_enable/2": { "body":"cors_enable(${1:Request}, ${2:Options})$3\n$0", "description":"[det]cors_enable(+Request, +Options).\nCORS reply to a Preflight OPTIONS request. Request is the HTTP request. Options provides: methods(+List): List of supported HTTP methods. The default is GET, only allowing for read requests.\n\nheaders(+List): List of headers the client asks for and we allow. The default is to simply echo what has been requested for.\n\n Both methods and headers may use Prolog friendly syntax, e.g., get for a method and content_type for a header. \n\nSee also: http://www.html5rocks.com/en/tutorials/cors/\n\n ", "prefix":"cors_enable" }, "http/http_cors:http_session_cookie/1": { "body":"http_session_cookie(${1:Cookie})$2\n$0", "description":"[det]http_session_cookie(-Cookie).\nGenerate a random cookie that can be used by a browser to identify the current session. The cookie has the format XXXX-XXXX-XXXX-XXXX[.], where XXXX are random hexadecimal numbers and [.] is the optionally added routing information.", "prefix":"http_session_cookie" }, "http/http_dcg_basics:alpha_to_lower/3": { "body": ["alpha_to_lower(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"alpha_to_lower('Param1','Param2','Param3')", "prefix":"alpha_to_lower" }, "http/http_dcg_basics:atom/3": { "body": ["atom(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"atom('Param1','Param2','Param3')", "prefix":"atom" }, "http/http_dcg_basics:blank/2": { "body": ["blank(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"blank('Param1','Param2')", "prefix":"blank" }, "http/http_dcg_basics:blanks/2": { "body": ["blanks(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"blanks('Param1','Param2')", "prefix":"blanks" }, "http/http_dcg_basics:blanks_to_nl/2": { "body": ["blanks_to_nl(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"blanks_to_nl('Param1','Param2')", "prefix":"blanks_to_nl" }, "http/http_dcg_basics:digit/3": { "body": ["digit(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"digit('Param1','Param2','Param3')", "prefix":"digit" }, "http/http_dcg_basics:digits/3": { "body": ["digits(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"digits('Param1','Param2','Param3')", "prefix":"digits" }, "http/http_dcg_basics:eos/2": { "body": ["eos(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"eos('Param1','Param2')", "prefix":"eos" }, "http/http_dcg_basics:float/3": { "body": ["float(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"float('Param1','Param2','Param3')", "prefix":"float" }, "http/http_dcg_basics:integer/3": { "body": ["integer(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"integer('Param1','Param2','Param3')", "prefix":"integer" }, "http/http_dcg_basics:nonblank/3": { "body": ["nonblank(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nonblank('Param1','Param2','Param3')", "prefix":"nonblank" }, 
"http/http_dcg_basics:nonblanks/3": { "body": ["nonblanks(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"nonblanks('Param1','Param2','Param3')", "prefix":"nonblanks" }, "http/http_dcg_basics:number/3": { "body": ["number(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"number('Param1','Param2','Param3')", "prefix":"number" }, "http/http_dcg_basics:prolog_var_name/3": { "body": [ "prolog_var_name(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"prolog_var_name('Param1','Param2','Param3')", "prefix":"prolog_var_name" }, "http/http_dcg_basics:remainder/3": { "body": ["remainder(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"remainder('Param1','Param2','Param3')", "prefix":"remainder" }, "http/http_dcg_basics:string/3": { "body": ["string(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"string('Param1','Param2','Param3')", "prefix":"string" }, "http/http_dcg_basics:string_without/4": { "body": [ "string_without(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'}, ${4:'Param4'})$5\n$0" ], "description":"string_without('Param1','Param2','Param3','Param4')", "prefix":"string_without" }, "http/http_dcg_basics:white/2": { "body": ["white(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"white('Param1','Param2')", "prefix":"white" }, "http/http_dcg_basics:whites/2": { "body": ["whites(${1:'Param1'}, ${2:'Param2'})$3\n$0" ], "description":"whites('Param1','Param2')", "prefix":"whites" }, "http/http_dcg_basics:xdigit/3": { "body": ["xdigit(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"xdigit('Param1','Param2','Param3')", "prefix":"xdigit" }, "http/http_dcg_basics:xdigits/3": { "body": ["xdigits(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"xdigits('Param1','Param2','Param3')", "prefix":"xdigits" }, "http/http_dcg_basics:xinteger/3": { "body": ["xinteger(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"xinteger('Param1','Param2','Param3')", "prefix":"xinteger" }, "http/http_digest:http_digest_challenge/2": { "body":"http_digest_challenge(${1:Realm}, ${2:Options})$3\n$0", "description":"http_digest_challenge(+Realm, +Options)//.\nGenerate the content for a 401 WWW-Authenticate: Digest header field.", "prefix":"http_digest_challenge" }, "http/http_digest:http_digest_password_hash/4": { "body":"http_digest_password_hash(${1:User}, ${2:Realm}, ${3:Password}, ${4:Hash})$5\n$0", "description":"[det]http_digest_password_hash(+User, +Realm, +Password, -Hash).\nCompute the password hash for the HTTP password file. Note that the HTTP digest mechanism does allow us to use a seeded expensive arbitrary hash function. Instead, the hash is defined as the MD5 of the following components: \n\n::.\n\n The inexpensive MD5 algorithm makes the hash sensitive to brute force attacks while the lack of seeding make the hashes sensitive for rainbow table attacks, although the value is somewhat limited because the realm and user are part of the hash.\n\n", "prefix":"http_digest_password_hash" }, "http/http_digest:http_digest_response/5": { "body":"http_digest_response(${1:Challenge}, ${2:User}, ${3:Password}, ${4:Reply}, ${5:Options})$6\n$0", "description":"http_digest_response(+Challenge, +User, +Password, -Reply, +Options).\nFormulate a reply to a digest authentication request. Options: path(+Path): The request URI send along with the authentication. Defaults to /\n\nmethod(+Method): The HTTP method. 
Defaults to 'GET'\n\nnc(+Integer): The nonce-count as an integer. This is formatted as an 8 hex-digit string.\n\n Challenge is a list Name(Value), normally from http_parse_digest_challenge/2. Must contain realm and nonce. Optionally contains opaque. User is the user we want to authenticate. Password is the user's password. Options provides additional options. ", "prefix":"http_digest_response" }, "http/http_digest:http_parse_digest_challenge/2": { "body":"http_parse_digest_challenge(${1:Challenge}, ${2:Fields})$3\n$0", "description":"[det]http_parse_digest_challenge(+Challenge, -Fields).\nParse the value of an HTTP WWW-Authenticate header into a list of Name(Value) terms.", "prefix":"http_parse_digest_challenge" }, "http/http_dirindex:directory_index/2": { "body":"directory_index(${1:Dir}, ${2:Options})$3\n$0", "description":"[det]directory_index(+Dir, +Options)//.\nShow index for a directory. Options processed: order_by(+Field): Sort the files in the directory listing by Field. Field is one of name (default), size or time.\n\norder(+AscentDescent): Sorting order. Default is ascending. The alternative is descending\n\n ", "prefix":"directory_index" }, "http/http_dirindex:http_reply_dirindex/3": { "body":"http_reply_dirindex(${1:DirSpec}, ${2:Options}, ${3:Request})$4\n$0", "description":"[det]http_reply_dirindex(+DirSpec, +Options, +Request).\nProvide a directory listing for Request, assuming it is an index for the physical directory Dir. If the request-path does not end with /, first return a moved (301 Moved Permanently) reply. The calling convention allows for direct calling from http_handler/3.\n\n", "prefix":"http_reply_dirindex" }, "http/http_dirindex:http_switch_protocol/2": { "body":"http_switch_protocol(${1:Goal}, ${2:Options})$3\n$0", "description":"http_switch_protocol(:Goal, +Options).\nSend an \"HTTP 101 Switching Protocols\" reply. After sending the reply, the HTTP library calls call(Goal, InStream, OutStream), where InStream and OutStream are the raw streams to the HTTP client. This allows the communication to continue using an alternative protocol. If Goal fails or throws an exception, the streams are closed by the server. Otherwise Goal is responsible for closing the streams. Note that Goal runs in the HTTP handler thread. Typically, the handler should be registered using the spawn option of http_handler/3, or Goal must call thread_create/3 to allow the HTTP worker to return to the worker pool. \n\nThe streams use binary (octet) encoding and have their I/O timeout set to the server timeout (default 60 seconds). The predicate set_stream/2 can be used to change the encoding, change or cancel the timeout. \n\nThis predicate interacts with the server library by throwing an exception. \n\nThe following options are supported: \n\nheader(+Headers): Backward compatible. Use headers(+Headers).\n\nheaders(+Headers): Additional headers sent with the reply. Each header takes the form Name(Value).\n\n ", "prefix":"http_switch_protocol" }, "http/http_dispatch:http_404/2": { "body":"http_404(${1:Options}, ${2:Request})$3\n$0", "description":"[det]http_404(+Options, +Request).\nReply using an \"HTTP 404 not found\" page. This handler is intended as a fallback handler for prefix handlers. 
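For example, a prefix handler may be registered with http_404/2 as its goal so that unresolved paths below it yield a 404 reply (a sketch mirroring the example under http_handler/3; my_pool is an illustrative thread pool name): \n\n:- http_handler(/, http_404([index('index.html')]),\n [spawn(my_pool),prefix]).\n\n 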
Options processed are: index(Location): If there is no path-info, redirect the request to Location using http_redirect/3.\n\n Errors: http_reply(not_found(Path))\n\n ", "prefix":"http_404" }, "http/http_dispatch:http_current_handler/2": { "body":"http_current_handler(${1:Location}, ${2:Closure})$3\n$0", "description":"[nondet]http_current_handler(-Location, :Closure).\nTrue if Location is handled by Closure.", "prefix":"http_current_handler" }, "http/http_dispatch:http_current_handler/3": { "body":"http_current_handler(${1:Location}, ${2:Closure}, ${3:Options})$4\n$0", "description":"[nondet]http_current_handler(?Location, :Closure, ?Options).\nResolve the current handler and options to execute it.", "prefix":"http_current_handler" }, "http/http_dispatch:http_delete_handler/1": { "body":"http_delete_handler(${1:Spec})$2\n$0", "description":"[det]http_delete_handler(+Spec).\nDelete handler for Spec. Typically, this should only be used for handlers that are registered dynamically. Spec is one of: id(Id): Delete a handler with the given id. The default id is the handler-predicate-name.\n\npath(Path): Delete handler that serves the given path.\n\n ", "prefix":"http_delete_handler" }, "http/http_dispatch:http_dispatch/1": { "body":"http_dispatch(${1:Request})$2\n$0", "description":"[det]http_dispatch(Request).\nDispatch a Request using http_handler/3 registrations.", "prefix":"http_dispatch" }, "http/http_dispatch:http_handler/3": { "body":"http_handler(${1:Path}, ${2:Closure}, ${3:Options})$4\n$0", "description":"[det]http_handler(+Path, :Closure, +Options).\nRegister Closure as a handler for HTTP requests. Path is a specification as provided by http_path.pl. If an HTTP request arrives at the server that matches Path, Closure is called with one extra argument: the parsed HTTP request. Options is a list containing the following options: authentication(+Type): Demand authentication. Authentication methods are pluggable. The library http_authenticate.pl provides a plugin for user/password based Basic HTTP authentication.\n\nchunked: Use Transfer-encoding: chunked if the client allows for it.\n\ncontent_type(+Term): Specifies the content-type of the reply. This value is currently not used by this library. It enhances the reflexive capabilities of this library through http_current_handler/3.\n\nid(+Term): Identifier of the handler. The default identifier is the predicate name. Used by http_location_by_id/2.\n\nhide_children(+Bool): If true on a prefix-handler (see prefix), possible children are masked. This can be used to (temporary) overrule part of the tree.\n\nmethod(+Method): Declare that the handler processes Method. This is equivalent to methods([Method]). Using method(*) allows for all methods.\n\nmethods(+ListOfMethods): Declare that the handler processes all of the given methods. If this option appears multiple times, the methods are combined.\n\nprefix: Call Pred on any location that is a specialisation of Path. If multiple handlers match, the one with the longest path is used. Options defined with a prefix handler are the default options for paths that start with this prefix. Note that the handler acts as a fallback handler for the tree below it: \n\n:- http_handler(/, http_404([index('index.html')]),\n [spawn(my_pool),prefix]).\n\n \n\npriority(+Integer): If two handlers handle the same path, the one with the highest priority is used. If equal, the last registered is used. Please be aware that the order of clauses in multifile predicates can change due to reloading files. 
The default priority is 0 (zero).\n\nspawn(+SpawnOptions): Run the handler in a separate thread. If SpawnOptions is an atom, it is interpreted as a thread pool name (see create_thread_pool/3). Otherwise the options are passed to http_spawn/2 and from there to thread_create/3. These options are typically used to set the stack limits.\n\ntime_limit(+Spec): One of infinite, default or a positive number (seconds). If default, the value from the setting http:time_limit is taken. The default of this setting is 300 (5 minutes). See setting/2.\n\n Note that http_handler/3 is normally invoked as a directive and processed using term-expansion. Using term-expansion ensures proper update through make/0 when the specification is modified. We do not expand when the cross-referencer is running to ensure proper handling of the meta-call. \n\nErrors: existence_error(http_location, Location)\n\nSee also: http_reply_file/3 and http_redirect/3 are generic handlers to serve files and achieve redirects.\n\n ", "prefix":"http_handler" }, "http/http_dispatch:http_link_to_id/3": { "body":"http_link_to_id(${1:HandleID}, ${2:Parameters}, ${3:HREF})$4\n$0", "description":"http_link_to_id(+HandleID, +Parameters, -HREF).\nHREF is a link on the local server to a handler with given ID, passing the given Parameters. This predicate is typically used to formulate an HREF that resolves to a handler implementing a particular predicate. The code below provides a typical example. The predicate user_details/1 returns a page with details about a user from a given id. This predicate is registered as a handler. The DCG user_link/3 renders a link to a user, displaying the name and calling user_details/1 when clicked. Note that the location (root(user_details)) is irrelevant in this equation and HTTP locations can thus be moved freely without breaking this code fragment. \n\n:- http_handler(root(user_details), user_details, []).\n\nuser_details(Request) :-\n http_parameters(Request,\n [ user_id(ID)\n ]),\n ...\n\nuser_link(ID) -->\n { user_name(ID, Name),\n http_link_to_id(user_details, [id(ID)], HREF)\n },\n html(a([class(user), href(HREF)], Name)).\n\n Parameters is one of \n\npath_postfix(File) to pass a single value as the last segment of the HTTP location (path). This way of passing a parameter is commonly used in REST APIs.\nA list of search parameters for a GET request.\n\n \n\n See also: http_location_by_id/2 and http_handler/3 for defining and specifying handler IDs.\n\n ", "prefix":"http_link_to_id" }, "http/http_dispatch:http_location_by_id/2": { "body":"http_location_by_id(${1:ID}, ${2:Location})$3\n$0", "description":"[det]http_location_by_id(+ID, -Location).\nFind the HTTP Location of handler with ID. If the setting (see setting/2) http:prefix is active, Location is the handler location prefixed with the prefix setting. Handler IDs can be specified in two ways: id(ID): If this appears in the option list of the handler, it is used and takes preference over using the predicate.\n\nM : PredName: The module-qualified name of the predicate.\n\nPredName: The unqualified name of the predicate.\n\n Errors: existence_error(http_handler_id, Id).\n\ndeprecated: The predicate http_link_to_id/3 provides the same functionality with the option to add query parameters or a path parameter.\n\n ", "prefix":"http_location_by_id" }, "http/http_dispatch:http_redirect/3": { "body":"http_redirect(${1:How}, ${2:To}, ${3:Request})$4\n$0", "description":"[det]http_redirect(+How, +To, +Request).\nRedirect to a new location. 
The argument order, using the Request as last argument, allows for calling this directly from the handler declaration: \n\n:- http_handler(root(.),\n http_redirect(moved, myapp('index.html')),\n []).\n\n How is one of moved, moved_temporary or see_other. To is an atom, an aliased path as defined by http_absolute_location/3, or a term location_by_id(Id). If To is not absolute, it is resolved relative to the current location. ", "prefix":"http_redirect" }, "http/http_dispatch:http_reload_with_parameters/3": { "body":"http_reload_with_parameters(${1:Request}, ${2:Parameters}, ${3:HREF})$4\n$0", "description":"[det]http_reload_with_parameters(+Request, +Parameters, -HREF).\nCreate a request on the current handler with replaced search parameters.", "prefix":"http_reload_with_parameters" }, "http/http_dispatch:http_reply_file/3": { "body":"http_reply_file(${1:FileSpec}, ${2:Options}, ${3:Request})$4\n$0", "description":"[det]http_reply_file(+FileSpec, +Options, +Request).\nOptions is a list of cache(+Boolean): If true (default), handle If-modified-since and send modification time.\n\nmime_type(+Type): Overrule mime-type guessing from the filename as provided by file_mime_type/2.\n\nstatic_gzip(+Boolean): If true (default false) and, in addition to the plain file, there is a .gz file that is not older than the plain file and the client accepts gzip encoding, send the compressed file with Transfer-encoding: gzip.\n\nunsafe(+Boolean): If false (default), validate that FileSpec does not contain references to parent directories. E.g., specifications such as www('../../etc/passwd') are not allowed.\n\nheaders(+List): Provides additional reply-header fields, encoded as a list of Field(Value).\n\n If caching is not disabled, it processes the request headers If-modified-since and Range. \n\nthrows: - http_reply(not_modified) - http_reply(file(MimeType, Path))\n\n ", "prefix":"http_reply_file" }, "http/http_dispatch:http_safe_file/2": { "body":"http_safe_file(${1:FileSpec}, ${2:Options})$3\n$0", "description":"[det]http_safe_file(+FileSpec, +Options).\nTrue if FileSpec is considered safe. If it is an atom, it cannot be absolute and cannot have references to parent directories. If it is of the form alias(Sub), then Sub cannot have references to parent directories. Errors: - instantiation_error - permission_error(read, file, FileSpec)\n\n ", "prefix":"http_safe_file" }, "http/http_dispatch:http_switch_protocol/2": { "body":"http_switch_protocol(${1:Goal}, ${2:Options})$3\n$0", "description":"http_switch_protocol(:Goal, +Options).\nSend an \"HTTP 101 Switching Protocols\" reply. After sending the reply, the HTTP library calls call(Goal, InStream, OutStream), where InStream and OutStream are the raw streams to the HTTP client. This allows the communication to continue using an alternative protocol. If Goal fails or throws an exception, the streams are closed by the server. Otherwise Goal is responsible for closing the streams. Note that Goal runs in the HTTP handler thread. Typically, the handler should be registered using the spawn option of http_handler/3, or Goal must call thread_create/3 to allow the HTTP worker to return to the worker pool. \n\nThe streams use binary (octet) encoding and have their I/O timeout set to the server timeout (default 60 seconds). The predicate set_stream/2 can be used to change the encoding, change or cancel the timeout. \n\nThis predicate interacts with the server library by throwing an exception. 
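\n\nA minimal sketch of an upgrading handler (the handler name upgrade_echo/1 and the echo goal are illustrative, not part of the library): \n\nupgrade_echo(_Request) :-\n http_switch_protocol(echo, []).\n\n% called as call(echo, In, Out) by the HTTP library\necho(In, Out) :-\n copy_stream_data(In, Out),\n close(Out),\n close(In).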
\n\nThe following options are supported: \n\nheader(+Headers): Backward compatible. Use headers(+Headers).\n\nheaders(+Headers): Additional headers sent with the reply. Each header takes the form Name(Value).\n\n ", "prefix":"http_switch_protocol" }, "http/http_exception:in_or_exclude_backtrace/2": { "body": ["in_or_exclude_backtrace(${1:ErrorIn}, ${2:ErrorOut})$3\n$0" ], "description":" in_or_exclude_backtrace(+ErrorIn, -ErrorOut)\n\n Remove the stacktrace from the exception, unless setting\n `http:client_backtrace` is `true`.", "prefix":"in_or_exclude_backtrace" }, "http/http_exception:map_exception_to_http_status/4": { "body": [ "map_exception_to_http_status(${1:Exception}, ${2:Reply}, ${3:HdrExtra}, ${4:Context})$5\n$0" ], "description":" map_exception_to_http_status(+Exception, -Reply, -HdrExtra, -Context)\n\n Map certain defined exceptions to special reply codes. The\n http(not_modified) provides backward compatibility to\n http_reply(not_modified).", "prefix":"map_exception_to_http_status" }, "http/http_files:http_reply_from_files/3": { "body":"http_reply_from_files(${1:Dir}, ${2:Options}, ${3:Request})$4\n$0", "description":"http_reply_from_files(+Dir, +Options, +Request).\nHTTP handler that serves files from the directory Dir. This handler uses http_reply_file/3 to reply plain files. If the request resolves to a directory, it uses the option indexes to locate an index file (see below) or uses http_reply_dirindex/3 to create a listing of the directory. Options: \n\nindexes(+List): List of files tried to find an index for a directory. The default is ['index.html'].\n\n Note that this handler must be tagged as a prefix handler (see http_handler/3 and module introduction). This also implies that it is possible to override more specific locations in the hierarchy using http_handler/3 with a longer path-specifier.\n\nDir is either a directory or a path-specification as used by absolute_file_name/3. This option provides great flexibility in (re-)locating the physical files and allows merging the files of multiple physical locations into one web-hierarchy by using multiple user:file_search_path/2 clauses that define the same alias. See also: The hookable predicate file_mime_type/2 is used to determine the Content-type from the file name.\n\n ", "prefix":"http_reply_from_files" }, "http/http_header:http_join_headers/3": { "body":"http_join_headers(${1:Default}, ${2:Header}, ${3:Out})$4\n$0", "description":"http_join_headers(+Default, +Header, -Out).\nAppend headers from Default to Header if they are not already part of it.", "prefix":"http_join_headers" }, "http/http_header:http_parse_header/2": { "body":"http_parse_header(${1:Text}, ${2:Header})$3\n$0", "description":"[det]http_parse_header(+Text:codes, -Header:list).\nHeader is a list of Name(Value)-terms representing the structure of the HTTP header in Text. Errors: domain_error(http_request_line, Line)\n\n ", "prefix":"http_parse_header" }, "http/http_header:http_parse_header_value/3": { "body":"http_parse_header_value(${1:Field}, ${2:Value}, ${3:Prolog})$4\n$0", "description":"[semidet]http_parse_header_value(+Field, +Value, -Prolog).\nTranslate Value into a meaningful Prolog term. Field denotes the HTTP request field for which we do the translation. Supported fields are: content_length: Converted into an integer\n\ncookie: Converted into a list with Name=Value by cookies/3.\n\nset_cookie: Converted into a term set_cookie(Name, Value, Options). 
Options is a list consisting of Name=Value or a single atom (e.g., secure)\n\nhost: Converted to HostName:Port if applicable.\n\nrange: Converted into bytes(From, To), where From is an integer and To is either an integer or the atom end.\n\naccept: Parsed to a list of media descriptions. Each media is a term media(Type, TypeParams, Quality, AcceptExts). The list is sorted according to preference.\n\ncontent_disposition: Parsed into disposition(Name, Attributes), where Attributes is a list of Name=Value pairs.\n\ncontent_type: Parsed into media(Type/SubType, Attributes), where Attributes is a list of Name=Value pairs.\n\n ", "prefix":"http_parse_header_value" }, "http/http_header:http_post_data/3": { "body":"http_post_data(${1:Data}, ${2:Out}, ${3:HdrExtra})$4\n$0", "description":"[det]http_post_data(+Data, +Out:stream, +HdrExtra).\nSend data on behalf on an HTTP POST request. This predicate is normally called by http_post/4 from http_client.pl to send the POST data to the server. Data is one of: \n\nhtml(+Tokens) Result of html/3 from html_write.pl\nxml(+Term) Post the result of xml_write/3 using the Mime-type text/xml\nxml(+Type, +Term) Post the result of xml_write/3 using the given Mime-type and an empty option list to xml_write/3.\nxml(+Type, +Term, +Options) Post the result of xml_write/3 using the given Mime-type and option list for xml_write/3.\nfile(+File) Send contents of a file. Mime-type is determined by file_mime_type/2.\nfile(+Type, +File) Send file with content of indicated mime-type.\nmemory_file(+Type, +Handle) Similar to file(+Type, +File), but using a memory file instead of a real file. See new_memory_file/1.\ncodes(+Codes) As codes(text/plain, Codes).\ncodes(+Type, +Codes) Send Codes using the indicated MIME-type.\nbytes(+Type, +Bytes) Send Bytes using the indicated MIME-type. Bytes is either a string of character codes 0..255 or list of integers in the range 0..255. Out-of-bound codes result in a representation error exception.\natom(+Atom) As atom(text/plain, Atom).\natom(+Type, +Atom) Send Atom using the indicated MIME-type.\ncgi_stream(+Stream, +Len) Read the input from Stream which, like CGI data starts with a partial HTTP header. The fields of this header are merged with the provided HdrExtra fields. The first Len characters of Stream are used.\nform(+ListOfParameter) Send data of the MIME type application/x-www-form-urlencoded as produced by browsers issuing a POST request from an HTML form. ListOfParameter is a list of Name=Value or Name(Value).\nform_data(+ListOfData) Send data of the MIME type multipart/form-data as produced by browsers issuing a POST request from an HTML form using enctype multipart/form-data. ListOfData is the same as for the List alternative described below. Below is an example. Repository, etc. are atoms providing the value, while the last argument provides a value from a file. ..., http_post([ protocol(http), host(Host), port(Port), path(ActionPath) ], form_data([ repository = Repository, dataFormat = DataFormat, baseURI = BaseURI, verifyData = Verify, data = file(File) ]), _Reply, []), ..., \nList If the argument is a plain list, it is sent using the MIME type multipart/mixed and packed using mime_pack/3. See mime_pack/3 for details on the argument format.\n\n", "prefix":"http_post_data" }, "http/http_header:http_read_header/2": { "body":"http_read_header(${1:Fd}, ${2:Header})$3\n$0", "description":"[det]http_read_header(+Fd, -Header).\nRead Name: Value lines from FD until an empty line is encountered. 
Field names are converted to Prolog conventions (all lower, _ instead of -): Content-Type: text/html --> content_type(text/html)", "prefix":"http_read_header" }, "http/http_header:http_read_reply_header/2": { "body":"http_read_reply_header(${1:FdIn}, ${2:Reply})$3\n$0", "description":"http_read_reply_header(+FdIn, -Reply).\nRead the HTTP reply header. Throws an exception if the current input does not contain a valid reply header.", "prefix":"http_read_reply_header" }, "http/http_header:http_read_request/2": { "body":"http_read_request(${1:FdIn}, ${2:Request})$3\n$0", "description":"[det]http_read_request(+FdIn:stream, -Request).\nRead an HTTP request-header from FdIn and return the broken-down request fields as +Name(+Value) pairs in a list. Request is unified to end_of_file if FdIn is at the end of input.", "prefix":"http_read_request" }, "http/http_header:http_reply/2": { "body":"http_reply(${1:Data}, ${2:Out})$3\n$0", "description":"[det]http_reply(+Data, +Out:stream).\n", "prefix":"http_reply" }, "http/http_header:http_reply/3": { "body":"http_reply(${1:Data}, ${2:Out}, ${3:HdrExtra})$4\n$0", "description":"[det]http_reply(+Data, +Out:stream, +HdrExtra).\n", "prefix":"http_reply" }, "http/http_header:http_reply/4": { "body":"http_reply(${1:Data}, ${2:Out}, ${3:HdrExtra}, ${4:Code})$5\n$0", "description":"[det]http_reply(+Data, +Out:stream, +HdrExtra, -Code).\n", "prefix":"http_reply" }, "http/http_header:http_reply/5": { "body":"http_reply(${1:Data}, ${2:Out}, ${3:HdrExtra}, ${4:Context}, ${5:Code})$6\n$0", "description":"[det]http_reply(+Data, +Out:stream, +HdrExtra, +Context, -Code).\n", "prefix":"http_reply" }, "http/http_header:http_reply/6": { "body":"http_reply(${1:Data}, ${2:Out}, ${3:HdrExtra}, ${4:Context}, ${5:Request}, ${6:Code})$7\n$0", "description":"[det]http_reply(+Data, +Out:stream, +HdrExtra, +Context, +Request, -Code).\nCompose a complete HTTP reply from the term Data using additional headers from HdrExtra to the output stream Out. HdrExtra is a list of Field(Value). Data is one of: html(HTML): HTML tokens as produced by html/3 from html_write.pl\n\nfile(+MimeType, +FileName): Reply content of FileName using MimeType\n\nfile(+MimeType, +FileName, +Range): Reply partial content of FileName with given MimeType\n\ntmp_file(+MimeType, +FileName): Same as file, but do not include modification time\n\nbytes(+MimeType, +Bytes): Send a sequence of Bytes with the indicated MimeType. Bytes is either a string of character codes 0..255 or list of integers in the range 0..255. Out-of-bound codes result in a representation error exception.\n\nstream(+In, +Len): Reply content of stream.\n\ncgi_stream(+In, +Len): Reply content of stream, which should start with an HTTP header, followed by a blank line. This is the typical output from a CGI script.\n\nStatus: HTTP status report as defined by http_status_reply/4.\n\n HdrExtra provides additional reply-header fields, encoded as Name(Value). It can also contain a field content_length(-Len) to retrieve the value of the Content-length header that is replied. 
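\n\nAn illustrative sketch (assuming Out is the stream connected to the client and library(http/html_write) is loaded to produce the token list): \n\nphrase(html(p('Hello world')), Tokens),\nhttp_reply(html(Tokens), Out, []).\n\n 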
Code is the numeric HTTP status code sent. To be done: Complete documentation\n\n ", "prefix":"http_reply" }, "http/http_header:http_reply_header/3": { "body":"http_reply_header(${1:Out}, ${2:What}, ${3:HdrExtra})$4\n$0", "description":"[det]http_reply_header(+Out:stream, +What, +HdrExtra).\nCreate a reply header using reply_header/5 and send it to Out.", "prefix":"http_reply_header" }, "http/http_header:http_schedule_logrotate/2": { "body":"http_schedule_logrotate(${1:When}, ${2:Options})$3\n$0", "description":"http_schedule_logrotate(When, Options).\nSchedule log rotation based on maintenance broadcasts. When is one of: daily(Hour:Min): Run each day at Hour:Min. Min is rounded to a multiple of 5.\n\nweekly(Day, Hour:Min): Run at the given Day and Time each week. Day is either a number 1..7 (1 is Monday) or a weekday name or abbreviation.\n\nmonthly(DayOfTheMonth, Hour:Min): Run each month at the given Day (1..31). Note that not all months have all days.\n\n This must be used with a timer that broadcasts a maintenance(_,_) message (see broadcast/1). Such a timer is part of library(http/http_unix_daemon).\n\n", "prefix":"http_schedule_logrotate" }, "http/http_header:http_status_reply/4": { "body":"http_status_reply(${1:Status}, ${2:Out}, ${3:HdrExtra}, ${4:Code})$5\n$0", "description":"[det]http_status_reply(+Status, +Out, +HdrExtra, -Code).\n", "prefix":"http_status_reply" }, "http/http_header:http_status_reply/5": { "body":"http_status_reply(${1:Status}, ${2:Out}, ${3:HdrExtra}, ${4:Context}, ${5:Code})$6\n$0", "description":"[det]http_status_reply(+Status, +Out, +HdrExtra, +Context, -Code).\n", "prefix":"http_status_reply" }, "http/http_header:http_status_reply/6": { "body":"http_status_reply(${1:Status}, ${2:Out}, ${3:HdrExtra}, ${4:Context}, ${5:Request}, ${6:Code})$7\n$0", "description":"[det]http_status_reply(+Status, +Out, +HdrExtra, +Context, +Request, -Code).\nEmit HTML non-200 status reports. Such replies are always sent as UTF-8 documents. Status can be one of the following: \n\nauthorise(Method): Challenge authorization. Method is one of basic(Realm) or digest(Digest)\n\nauthorise(basic, Realm): Same as authorise(basic(Realm)). Deprecated.\n\nbad_request(ErrorTerm), busy, created(Location), forbidden(Url), moved(To), moved_temporary(To), no_content, not_acceptable(WhyHtml), not_found(Path), method_not_allowed(Method, Path), not_modified, resource_error(ErrorTerm), see_other(To), switching_protocols(Goal, Options), server_error(ErrorTerm) or unavailable(WhyHtml). \n\n ", "prefix":"http_status_reply" }, "http/http_header:http_timestamp/2": { "body":"http_timestamp(${1:Time}, ${2:Text})$3\n$0", "description":"[det]http_timestamp(+Time:timestamp, -Text:atom).\nGenerate a description of a Time in HTTP format (RFC1123)", "prefix":"http_timestamp" }, "http/http_header:http_update_connection/4": { "body":"http_update_connection(${1:CGIHeader}, ${2:Request}, ${3:Connection}, ${4:Header})$5\n$0", "description":"http_update_connection(+CGIHeader, +Request, -Connection, -Header).\nMerge keep-alive information from Request and CGIHeader into Header.", "prefix":"http_update_connection" }, "http/http_header:http_update_encoding/3": { "body":"http_update_encoding(${1:HeaderIn}, ${2:Encoding}, ${3:HeaderOut})$4\n$0", "description":"http_update_encoding(+HeaderIn, -Encoding, -HeaderOut).\nAllow for rewrite of the header, adjusting the encoding. We distinguish three options. If the user announces `text', we always use UTF-8 encoding. 
If the user announces charset=utf-8 we use UTF-8 and otherwise we use octet (raw) encoding. Alternatively we could dynamically choose for ASCII, ISO-Latin-1 or UTF-8.", "prefix":"http_update_encoding" }, "http/http_header:http_update_transfer/4": { "body":"http_update_transfer(${1:Request}, ${2:CGIHeader}, ${3:Transfer}, ${4:Header})$5\n$0", "description":"http_update_transfer(+Request, +CGIHeader, -Transfer, -Header).\nDecide on the transfer encoding from the Request and the CGI header. The behaviour depends on the setting http:chunked_transfer. If never, even explicit requests are ignored. If on_request, chunked encoding is used if requested through the CGI header and allowed by the client. If if_possible, chunked encoding is used whenever the client allows for it, which is interpreted as the client supporting HTTP 1.1 or higher. Chunked encoding is more space efficient and allows the client to start processing partial results. The drawback is that errors lead to incomplete pages instead of a nicely formatted complete page.\n\n", "prefix":"http_update_transfer" }, "http/http_host:http_current_host/4": { "body":"http_current_host(${1:Request}, ${2:Hostname}, ${3:Port}, ${4:Options})$5\n$0", "description":"[det]http_current_host(?Request, -Hostname, -Port, +Options).\n deprecated: Use http_public_host/4 (same semantics)\n\n ", "prefix":"http_current_host" }, "http/http_host:http_public_host/4": { "body":"http_public_host(${1:Request}, ${2:Hostname}, ${3:Port}, ${4:Options})$5\n$0", "description":"[det]http_public_host(?Request, -Hostname, -Port, +Options).\nCurrent global host and port of the HTTP server. This is the basis to form absolute addresses, which we need for redirection-based interaction such as the OpenID protocol. Options are: global(+Bool): If true (default false), try to replace a local hostname by a world-wide accessible name.\n\n This predicate performs the following steps to find the host and port: \n\n\n\nUse the settings http:public_host and http:public_port\nUse X-Forwarded-Host header, which applies if this server runs behind a proxy.\nUse the Host header, which applies for HTTP 1.1 if we are contacted directly.\nUse gethostname/1 to find the host and http_current_server/2 to find the port.\n\n Request is the current request. If it is left unbound, and the request is needed, it is obtained with http_current_request/1. ", "prefix":"http_public_host" }, "http/http_host:http_public_host_url/2": { "body":"http_public_host_url(${1:Request}, ${2:URL})$3\n$0", "description":"[det]http_public_host_url(+Request, -URL).\nTrue when URL is the public URL at which this server can be contacted. This value is not easy to obtain. See http_public_host/4 for the hardest part: find the host and port.", "prefix":"http_public_host_url" }, "http/http_host:http_public_url/2": { "body":"http_public_url(${1:Request}, ${2:URL})$3\n$0", "description":"[det]http_public_url(+Request, -URL).\nTrue when URL is an absolute URL for the current request. Typically, the login page should redirect to this URL to avoid losing the session.", "prefix":"http_public_url" }, "http/http_host:http_relative_path/2": { "body":"http_relative_path(${1:AbsPath}, ${2:RelPath})$3\n$0", "description":"http_relative_path(+AbsPath, -RelPath).\nConvert an absolute path (without host, fragment or search) into a path relative to the current page, defined as the path component from the current request (see http_current_request/1). 
This call is intended to create reusable components returning relative paths for easier support of reverse proxies. If ---for whatever reason--- the conversion is not possible it simply unifies RelPath to AbsPath.\n\n", "prefix":"http_relative_path" }, "http/http_inetd:http_server/2": { "body": ["http_server(${1:Goal}, ${2:Options})$3\n$0" ], "description":" http_server(:Goal, +Options)\n\n Start the server from inetd. This is really easy as user_input\n is connected to the HTTP input and user_output is the place to\n write our reply to.", "prefix":"http_server" }, "http/http_json:http_read_json/2": { "body": ["http_read_json(${1:Request}, ${2:JSON})$3\n$0" ], "description":" http_read_json(+Request, -JSON) is det.\n http_read_json(+Request, -JSON, +Options) is det.\n\n Extract JSON data posted to this HTTP request. Options are\n passed to json_read/3. In addition, this option is processed:\n\n * json_object(+As)\n One of =term= (default) to generate a classical Prolog\n term or =dict= to exploit the SWI-Prolog version 7 data type\n extensions. See json_read_dict/3.\n\n @error domain_error(mimetype, Found) if the mimetype is\n not known (see json_type/1).\n @error domain_error(method, Method) if the request is not\n a =POST= or =PUT= request.", "prefix":"http_read_json" }, "http/http_json:http_read_json/3": { "body": ["http_read_json(${1:Request}, ${2:JSON}, ${3:Options})$4\n$0" ], "description":" http_read_json(+Request, -JSON) is det.\n http_read_json(+Request, -JSON, +Options) is det.\n\n Extract JSON data posted to this HTTP request. Options are\n passed to json_read/3. In addition, this option is processed:\n\n * json_object(+As)\n One of =term= (default) to generate a classical Prolog\n term or =dict= to exploit the SWI-Prolog version 7 data type\n extensions. See json_read_dict/3.\n\n @error domain_error(mimetype, Found) if the mimetype is\n not known (see json_type/1).\n @error domain_error(method, Method) if the request is not\n a =POST= or =PUT= request.", "prefix":"http_read_json" }, "http/http_json:http_read_json_dict/2": { "body": ["http_read_json_dict(${1:Request}, ${2:Dict})$3\n$0" ], "description":" http_read_json_dict(+Request, -Dict) is det.\n http_read_json_dict(+Request, -Dict, +Options) is det.\n\n Similar to http_read_json/2,3, but by default uses the version 7\n extended datatypes.", "prefix":"http_read_json_dict" }, "http/http_json:http_read_json_dict/3": { "body": ["http_read_json_dict(${1:Request}, ${2:Dict}, ${3:Options})$4\n$0" ], "description":" http_read_json_dict(+Request, -Dict) is det.\n http_read_json_dict(+Request, -Dict, +Options) is det.\n\n Similar to http_read_json/2,3, but by default uses the version 7\n extended datatypes.", "prefix":"http_read_json_dict" }, "http/http_json:reply_json/1": { "body": ["reply_json(${1:JSONTerm})$2\n$0" ], "description":" reply_json(+JSONTerm) is det.\n reply_json(+JSONTerm, +Options) is det.\n\n Formulate a JSON HTTP reply. See json_write/2 for details.\n The processed options are listed below. Remaining options are\n forwarded to json_write/3.\n\n * content_type(+Type)\n The default =|Content-type|= is =|application/json;\n charset=UTF8|=. =|charset=UTF8|= should not be required\n because JSON is defined to be UTF-8 encoded, but some\n clients insist on it.\n\n * status(+Code)\n The default status is 200. REST API functions may use\n other values from the 2XX range, such as 201 (created).\n\n * json_object(+As)\n One of =term= (classical json representation) or =dict=\n to use the new dict representation. 
If omitted and Term\n is a dict, =dict= is assumed. SWI-Prolog Version 7.", "prefix":"reply_json" }, "http/http_json:reply_json/2": { "body": ["reply_json(${1:JSONTerm}, ${2:Options})$3\n$0" ], "description":" reply_json(+JSONTerm) is det.\n reply_json(+JSONTerm, +Options) is det.\n\n Formulate a JSON HTTP reply. See json_write/2 for details.\n The processed options are listed below. Remaining options are\n forwarded to json_write/3.\n\n * content_type(+Type)\n The default =|Content-type|= is =|application/json;\n charset=UTF8|=. =|charset=UTF8|= should not be required\n because JSON is defined to be UTF-8 encoded, but some\n clients insist on it.\n\n * status(+Code)\n The default status is 200. REST API functions may use\n other values from the 2XX range, such as 201 (created).\n\n * json_object(+As)\n One of =term= (classical json representation) or =dict=\n to use the new dict representation. If omitted and Term\n is a dict, =dict= is assumed. SWI-Prolog Version 7.", "prefix":"reply_json" }, "http/http_json:reply_json_dict/1": { "body": ["reply_json_dict(${1:JSONTerm})$2\n$0" ], "description":" reply_json_dict(+JSONTerm) is det.\n reply_json_dict(+JSONTerm, +Options) is det.\n\n As reply_json/1 and reply_json/2, but assumes the new dict based\n data representation. Note that this is the default if the outer\n object is a dict. This predicate is needed to serialize a list\n of objects correctly and provides consistency with\n http_read_json_dict/2 and friends.", "prefix":"reply_json_dict" }, "http/http_json:reply_json_dict/2": { "body": ["reply_json_dict(${1:JSONTerm}, ${2:Options})$3\n$0" ], "description":" reply_json_dict(+JSONTerm) is det.\n reply_json_dict(+JSONTerm, +Options) is det.\n\n As reply_json/1 and reply_json/2, but assumes the new dict based\n data representation. Note that this is the default if the outer\n object is a dict. This predicate is needed to serialize a list\n of objects correctly and provides consistency with\n http_read_json_dict/2 and friends.", "prefix":"reply_json_dict" }, "http/http_log:http_current_host/4": { "body":"http_current_host(${1:Request}, ${2:Hostname}, ${3:Port}, ${4:Options})$5\n$0", "description":"[det]http_current_host(?Request, -Hostname, -Port, +Options).\n deprecated: Use http_public_host/4 (same semantics)\n\n ", "prefix":"http_current_host" }, "http/http_log:http_log/2": { "body":"http_log(${1:Format}, ${2:Args})$3\n$0", "description":"[det]http_log(+Format, +Args).\nWrite message from Format and Args to log-stream. See format/2 for details. Succeed without side effects if logging is not enabled.", "prefix":"http_log" }, "http/http_log:http_log_close/1": { "body":"http_log_close(${1:Reason})$2\n$0", "description":"[det]http_log_close(+Reason).\nIf there is a currently open HTTP logfile, close it after adding a term server(Reason, Time). to the logfile. This call is intended for cooperation with the Unix logrotate facility using the following schema: \n\nMove logfile (the HTTP server keeps writing to the moved file)\nInform the server using an HTTP request that calls http_log_close/1\nCompress the moved logfile\n\n author: Suggested by Jacco van Ossenbruggen\n\n ", "prefix":"http_log_close" }, "http/http_log:http_log_stream/1": { "body":"http_log_stream(${1:Stream})$2\n$0", "description":"[semidet]http_log_stream(-Stream).\nTrue when Stream is a stream to the opened HTTP log file. Opens the log file in append mode if the file is not yet open. The log file is determined from the setting http:logfile. 
If this setting is set to the empty atom (''), this predicate fails. If a file error is encountered, this is reported using print_message/2, after which this predicate silently fails.\n\n", "prefix":"http_log_stream" }, "http/http_log:http_logrotate/1": { "body":"http_logrotate(${1:Options})$2\n$0", "description":"[det]http_logrotate(+Options).\nRotate the available log files. Note that there are two ways to deal with the rotation of log files: \n\nUse the OS log rotation facility. In that case the OS must (1) move the logfile and (2) have something calling http_log_close/1 to close the (moved) file and make this server create a new one on the next log message. If library(http/http_unix_daemon) is used, closing is achieved by sending SIGHUP or SIGUSR1 to the process.\nCall this predicate at scheduled intervals. This can be achieved by calling http_schedule_logrotate/2 in the context of library(http/http_unix_daemon) which schedules the maintenance actions.\n\n Options: \n\nmin_size(+Bytes): Do not rotate if the log file is smaller than Bytes. The default is 1Mbytes.\n\nkeep_logs(+Count): Number of rotated log files to keep (default 10)\n\ncompress_logs(+Format): Compress the log files to the given format.\n\nbackground(+Boolean): If true, rotate the log files in the background.\n\n ", "prefix":"http_logrotate" }, "http/http_log:http_schedule_logrotate/2": { "body":"http_schedule_logrotate(${1:When}, ${2:Options})$3\n$0", "description":"http_schedule_logrotate(When, Options).\nSchedule log rotation based on maintenance broadcasts. When is one of: daily(Hour:Min): Run each day at Hour:Min. Min is rounded to a multitude of 5.\n\nweekly(Day, Hour:Min): Run at the given Day and Time each week. Day is either a number 1..7 (1 is Monday) or a weekday name or abbreviation.\n\nmonthly(DayOfTheMonth, Hour:Min): Run each month at the given Day (1..31). Note that not all months have all days.\n\n This must be used with a timer that broadcasts a maintenance(_,_) message (see broadcast/1). Such a timer is part of library(http/http_unix_daemon).\n\n", "prefix":"http_schedule_logrotate" }, "http/http_log:nolog/1": { "body":"nolog(${1:HTTPField})$2\n$0", "description":"[multifile]nolog(+HTTPField).\nMultifile predicate that can be defined to hide request parameters from the request logfile.", "prefix":"nolog" }, "http/http_log:nolog_post_content_type/1": { "body":"nolog_post_content_type(${1:Type})$2\n$0", "description":"[semidet,multifile]nolog_post_content_type(+Type).\nMultifile hook called with the Content-type header. If the hook succeeds, the POST data is not logged. For example, to stop logging anything but application/json messages: \n\n:- multifile http_log:nolog_post_content_type/1.\n\nhttp_log:nolog_post_content_type(Type) :-\n Type \\= (application/json).\n\n Type is a term MainType/SubType ", "prefix":"nolog_post_content_type" }, "http/http_log:password_field/1": { "body":"password_field(${1:Field})$2\n$0", "description":"[semidet,multifile]password_field(+Field).\nMultifile predicate that can be defined to hide passwords from the logfile.", "prefix":"password_field" }, "http/http_log:post_data_encoded/2": { "body":"post_data_encoded(${1:Bytes}, ${2:Encoded})$3\n$0", "description":"[det]post_data_encoded(?Bytes:string, ?Encoded:string).\nEncode the POST body for inclusion into the HTTP log file. The POST data is (in/de)flated using zopen/3 and base64 encoded using base64/3. 
The encoding makes long text messages shorter and keeps readable logfiles if binary data is posted.", "prefix":"post_data_encoded" }, "http/http_open:http_close_keep_alive/1": { "body":"http_close_keep_alive(${1:Address})$2\n$0", "description":"[det]http_close_keep_alive(+Address).\nClose all keep-alive connections matching Address. Address is of the form Host:Port. In particular, http_close_keep_alive(_) closes all currently known keep-alive connections.", "prefix":"http_close_keep_alive" }, "http/http_open:http_open/3": { "body":"http_open(${1:URL}, ${2:Stream}, ${3:Options})$4\n$0", "description":"[det]http_open(+URL, -Stream, +Options).\nOpen the data at the HTTP server as a Prolog stream. URL is either an atom specifying a URL or a list representing a broken-down URL as specified below. After this predicate succeeds the data can be read from Stream. After completion this stream must be closed using the built-in Prolog predicate close/1. Options provides additional options: authenticate(+Boolean): If false (default true), do not try to automatically authenticate the client if a 401 (Unauthorized) status code is received.\n\nauthorization(+Term): Send authorization. See also http_set_authorization/2. Supported schemes: basic(+User, +Password)HTTP Basic authentication.bearer(+Token)HTTP Bearer authentication.digest(+User, +Password)HTTP Digest authentication. This option is only provided if the plugin library(http/http_digest) is also loaded. \n\nconnection(+Connection): Specify the Connection header. Default is close. The alternative is Keep-alive. This maintains a pool of available connections as determined by keep_connection/1. The library(http/websockets) uses Keep-alive, Upgrade. Keep-alive connections can be closed explicitly using http_close_keep_alive/1. Keep-alive connections may significantly improve repetitive requests on the same server, especially if the IP route is long, HTTPS is used or the connection uses a proxy.\n\nfinal_url(-FinalURL): Unify FinalURL with the final destination. This differs from the original URL if the returned head of the original indicates an HTTP redirect (codes 301, 302 or 303). Without a redirect, FinalURL is the same as URL if URL is an atom, or a URL constructed from the parts.\n\nheader(Name, -AtomValue): If provided, AtomValue is unified with the value of the indicated field in the reply header. Name is matched case-insensitive and the underscore (_) matches the hyphen (-). Multiple of these options may be provided to extract multiple header fields. If the header is not available AtomValue is unified to the empty atom ('').\n\nheaders(-List): If provided, List is unified with a list of Name(Value) pairs corresponding to fields in the reply header. Name and Value follow the same conventions used by the header(Name,Value) option.\n\nmethod(+Method): One of get (default), head, delete, post, put or patch. The head message can be used in combination with the header(Name, Value) option to access information on the resource without actually fetching the resource itself. The returned stream must be closed immediately. If post(Data) is provided, the default is post.\n\nsize(-Size): Size is unified with the integer value of Content-Length in the reply header.\n\nversion(-Version): Version is a pair Major-Minor, where Major and Minor are integers representing the HTTP version in the reply header.\n\nrange(+Range): Ask for partial content. Range is a term Unit(From,To), where From is an integer and To is either an integer or the atom end. 
HTTP 1.1 only supports Unit = bytes. E.g., to ask for bytes 1000-1999, use the option range(bytes(1000,1999))\n\nredirect(+Boolean): If false (default true), do not automatically redirect if a 3XX code is received. Must be combined with status_code(Code) and one of the header options to read the redirect reply. In particular, without status_code(Code) a redirect is mapped to an exception.\n\nstatus_code(-Code): If this option is present and Code unifies with the HTTP status code, do not translate errors (4xx, 5xx) into an exception. Instead, http_open/3 behaves as if 200 (success) is returned, allowing the application to read the error document from the returned stream.\n\noutput(-Out): Unify the output stream with Out and do not close it. This can be used to upgrade a connection.\n\ntimeout(+Timeout): If provided, set a timeout on the stream using set_stream/2. With this option, if no new data arrives within Timeout seconds, the stream raises an exception. Default is to wait forever (infinite).\n\npost(+Data): Issue a POST request on the HTTP server. Data is handed to http_post_data/3.\n\nproxy(+Host:Port): Use an HTTP proxy to connect to the outside world. See also socket:proxy_for_url/3. This option overrules the proxy specification defined by socket:proxy_for_url/3.\n\nproxy(+Host, +Port): Synonym for proxy(+Host:Port). Deprecated.\n\nproxy_authorization(+Authorization): Send authorization to the proxy. Otherwise the same as the authorization option.\n\nbypass_proxy(+Boolean): If true, bypass proxy hooks. Default is false.\n\nrequest_header(Name=Value): Additional name-value parts are added in the order of appearance to the HTTP request header. No interpretation is done.\n\nmax_redirect(+Max): Sets the maximum length of a redirection chain. This is needed for some IRIs that redirect indefinitely to other IRIs without looping (e.g., redirecting to IRIs with a random element in them). Max must be either a non-negative integer or the atom infinite. The default value is 10.\n\nuser_agent(+Agent): Defines the value of the User-Agent field of the HTTP header. Default is SWI-Prolog.\n\n The hook http:open_options/2 can be used to provide default options based on the broken-down URL. The option status_code(-Code) is particularly useful to query REST interfaces that commonly return status codes other than 200 that need to be processed by the client code.\n\nURL is either an atom or string (url) or a list of parts. ", "prefix":"http_open" }, "http/http_open:http_set_authorization/2": { "body":"http_set_authorization(${1:URL}, ${2:Authorization})$3\n$0", "description":"[det]http_set_authorization(+URL, +Authorization).\nSet user/password to supply with URLs that have URL as prefix. If Authorization is the atom -, possibly defined authorization is cleared. For example: \n\n?- http_set_authorization('http://www.example.com/private/',\n basic('John', 'Secret'))\n\n To be done: Move to a separate module, so http_get/3, etc. can use this too.\n\n ", "prefix":"http_set_authorization" }, "http/http_openid:http_add_worker/2": { "body":"http_add_worker(${1:Port}, ${2:Options})$3\n$0", "description":"http_add_worker(+Port, +Options).\nAdd a new worker to the HTTP server for port Port. Options overrule the default queue options. 
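For example (8080 is a hypothetical port of an already running server), an extra worker with a bounded idle time could be added as shown below; the max_idle_time option is described next. \n\n?- http_add_worker(8080, [max_idle_time(60)]).\n\n 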
The following additional options are processed: max_idle_time(+Seconds): The created worker will automatically terminate if there is no new work within Seconds.\n\n ", "prefix":"http_add_worker" }, "http/http_openid:http_certificate_hook/3": { "body":"http_certificate_hook(${1:CertFile}, ${2:KeyFile}, ${3:Password})$4\n$0", "description":"[semidet,multifile]http_certificate_hook(+CertFile, +KeyFile, -Password).\nHook called before starting the server if the --https option is used. This hook may be used to create or refresh the certificate. If the hook binds Password to a string, this string will be used to decrypt the server private key as if the --password=Password option was given.", "prefix":"http_certificate_hook" }, "http/http_openid:http_current_request/1": { "body":"http_current_request(${1:Request})$2\n$0", "description":"http_current_request(-Request).\nGet access to the currently executing request. Request is the same as handed to Goal of http_wrapper/5 after applying rewrite rules as defined by http:request_expansion/2. Raises an existence error if there is no request in progress.", "prefix":"http_current_request" }, "http/http_openid:http_current_worker/2": { "body":"http_current_worker(${1:Port}, ${2:ThreadID})$3\n$0", "description":"http_current_worker(?Port, ?ThreadID).\nTrue if ThreadID is the identifier of a Prolog thread serving Port. This predicate is motivated to allow for the use of arbitrary interaction with the worker thread for development and statistics.", "prefix":"http_current_worker" }, "http/http_openid:http_daemon/0": { "body":"http_daemon$1\n$0", "description":"http_daemon.\nStart the HTTP server as a daemon process. This predicate processes the commandline arguments below. Commandline arguments that specify servers are processed in the order they appear using the following schema: \n\nArguments that act as default for all servers.\n--http=Spec or --https=Spec is followed by arguments for that server until the next --http=Spec or --https=Spec or the end of the options.\nIf no --http=Spec or --https=Spec appears, one HTTP server is created from the specified parameters. Examples: --workers=10 --http --https --http=8080 --https=8443 --http=localhost:8080 --workers=1 --https=8443 --workers=25 \n\n --port=Port: Start HTTP server at Port. It requires root permission and the option --user=User to open ports below 1000. The default port is 80. If --https is used, the default port is 443.\n\n--ip=IP: Only listen to the given IP address. Typically used as --ip=localhost to restrict access to connections from localhost if the server itself is behind an (Apache) proxy server running on the same host.\n\n--debug=Topic: Enable debugging Topic. See debug/3.\n\n--syslog=Ident: Write debug messages to the syslog daemon using Ident\n\n--user=User: When started as root to open a port below 1000, this option must be provided to switch to the target user for operating the server. The following actions are performed as root, i.e., before switching to User: open the socket(s)\nwrite the pidfile\nsetup syslog interaction\nRead the certificate, key and password file (--pwfile=File)\n\n\n\n--group=Group: May be used in addition to --user. If omitted, the login group of the target user is used.\n\n--pidfile=File: Write the PID of the daemon process to File.\n\n--output=File: Send output of the process to File. 
By default, all Prolog console output is discarded.\n\n--fork[=Bool]: If given as --no-fork or --fork=false, the process runs in the foreground.\n\n--http[=(Bool|Port|BindTo:Port)]: Create a plain HTTP server. If the argument is missing or true, create at the specified or default address. Else use the given port and interface. Thus, --http creates a server at port 80, --http=8080 creates one at port 8080 and --http=localhost:8080 creates one at port 8080 that is only accessible from localhost.\n\n--https[=(Bool|Port|BindTo:Port)]: As --http, but creates an HTTPS server. Use --certfile, --keyfile, --pwfile, --password and --cipherlist to configure SSL for this server.\n\n--certfile=File: The server certificate for HTTPS.\n\n--keyfile=File: The server private key for HTTPS.\n\n--pwfile=File: File holding the password for accessing the private key. This is preferred over using --password=PW as it allows using file protection to avoid leaking the password. The file is read before the server drops privileges when started with the --user option.\n\n--password=PW: The password for accessing the private key. See also `--pwfile`.\n\n--cipherlist=Ciphers: One or more cipher strings separated by colons. See the OpenSSL documentation for more information. Default is DEFAULT.\n\n--interactive[=Bool]: If true (default false) implies --no-fork and presents the Prolog toplevel after starting the server.\n\n--gtrace=[Bool]: Use the debugger to trace http_daemon/1.\n\n--sighup=Action: Action to perform on kill -HUP <pid>. Default is reload (running make/0). Alternative is quit, stopping the server.\n\n Other options are converted by argv_options/3 and passed to http_server/1. For example, this allows for: \n\n--workers=Count: Set the number of workers for the multi-threaded server.\n\n http_daemon/0 is defined as below. The start code for a specific server can use this as a starting point, for example for specifying defaults. \n\n\n\nhttp_daemon :-\n current_prolog_flag(argv, Argv),\n argv_options(Argv, _RestArgv, Options),\n http_daemon(Options).\n\n See also: http_daemon/1\n\n ", "prefix":"http_daemon" }, "http/http_openid:http_daemon/1": { "body":"http_daemon(${1:Options})$2\n$0", "description":"http_daemon(+Options).\nStart the HTTP server as a daemon process. This predicate processes a Prolog option list. It is normally called from http_daemon/0, which derives the option list from the command line arguments. Error handling depends on whether or not interactive(true) is in effect. If so, the error is printed before entering the toplevel. In non-interactive mode this predicate calls halt(1).\n\n", "prefix":"http_daemon" }, "http/http_openid:http_parameters/2": { "body":"http_parameters(${1:Request}, ${2:Parameters})$3\n$0", "description":"http_parameters(+Request, ?Parameters).\nThe predicate is passed the Request as provided to the handler goal by http_wrapper/5, as well as a partially instantiated list describing the requested parameters and their types. Each parameter specification in Parameters is a term of the format Name(-Value, +Options). Options is a list of option terms describing the type, default, etc. If no options are specified the parameter must be present and its value is returned in Value as an atom. If a parameter is missing, the exception error(existence_error(http_parameter, Name), _) is thrown. If the argument cannot be converted to the requested type, an error(existence_error(Type, Value), _) is raised, where the error context indicates the HTTP parameter. 
If not caught, the server translates both errors into a 400 Bad request HTTP message. \n\nOptions fall into three categories: those that handle presence of the parameter, those that guide conversion and restrict types, and those that support automatic generation of documentation. First, the presence-options: \n\ndefault(Default): If the named parameter is missing, Value is unified to Default.\n\noptional(true): If the named parameter is missing, Value is left unbound and no error is generated.\n\nlist(Type): The same parameter may not appear, or may appear multiple times. If this option is present, default and optional are ignored and the value is returned as a list. Type checking options are processed on each value.\n\nzero_or_more: Deprecated. Use list(Type).\n\n The type and conversion options are given below. The type-language can be extended by providing clauses for the multifile hook http:convert_parameter/3. \n\n;(Type1, Type2): Succeed if either Type1 or Type2 applies. It allows for checks such as (nonneg;oneof([infinite])) to specify an integer or a symbolic value.\n\noneof(List): Succeeds if the value is a member of the given list.\n\nlength > N: Succeeds if value is an atom of more than N characters.\n\nlength >= N: Succeeds if value is an atom of more than or equal to N characters.\n\nlength < N: Succeeds if value is an atom of less than N characters.\n\nlength =< N: Succeeds if value is an atom of length less than or equal to N characters.\n\natom: No-op. Allowed for consistency.\n\nstring: Convert value to a string.\n\nbetween(+Low, +High): Convert value to a number and if either Low or High is a float, force value to be a float. Then check that the value is in the given range, which includes the boundaries.\n\nboolean: Translate =true=, =yes=, =on= and '1' into =true=; =false=, =no=, =off= and '0' into =false=, and raise an error otherwise.\n\nfloat: Convert value to a float. Integers are transformed into floats. Throws a type-error otherwise.\n\ninteger: Convert value to an integer. Throws a type-error otherwise.\n\nnonneg: Convert value to a non-negative integer. Throws a type-error if the value cannot be converted to an integer, and a domain-error otherwise.\n\nnumber: Convert value to a number. Throws a type-error otherwise.\n\n The last set of options is to support automatic generation of HTTP API documentation from the sources. This facility is under development in ClioPatria; see http_help.pl. \n\ndescription(+Atom): Description of the parameter in plain text.\n\ngroup(+Parameters, +Options): Define a logical group of parameters. Parameters are processed as normal. Options may include a description of the group. Groups can be nested.\n\n Below is an example \n\n\n\nreply(Request) :-\n http_parameters(Request,\n [ title(Title, [ optional(true) ]),\n name(Name, [ length >= 2 ]),\n age(Age, [ between(0, 150) ])\n ]),\n ...\n\n Same as http_parameters(Request, Parameters, [])\n\n", "prefix":"http_parameters" }, "http/http_openid:http_parameters/3": { "body":"http_parameters(${1:Request}, ${2:Parameters}, ${3:Options})$4\n$0", "description":"http_parameters(+Request, ?Parameters, +Options).\nIn addition to http_parameters/2, the following options are defined. form_data(-Data): Return the entire set of provided Name=Value pairs from the GET or POST request. All values are returned as atoms.\n\nattribute_declarations(:Goal): If a parameter specification lacks the parameter options, call call(Goal, +ParamName, -Options) to find the options. 
Intended to share declarations over many calls to http_parameters/3. Using this construct, the above can be written as below. \n\nreply(Request) :-\n http_parameters(Request,\n [ title(Title),\n name(Name),\n age(Age)\n ],\n [ attribute_declarations(param)\n ]),\n ...\n\nparam(title, [optional(true)]).\nparam(name, [length >= 2 ]).\nparam(age, [integer]).\n\n \n\n ", "prefix":"http_parameters" }, "http/http_openid:http_read_request/2": { "body":"http_read_request(${1:Stream}, ${2:Request})$3\n$0", "description":"http_read_request(+Stream, -Request).\nReads an HTTP request from Stream and unifies Request with the parsed request. Request is a list of Name(Value) elements. It provides a number of predefined elements for the result of parsing the first line of the request, followed by the additional request parameters. The predefined fields are: host(Host): If the request contains Host: Host, Host is unified with the host name. If the value is of the format Host:Port, the host(Host) field only describes the host part and a field port(Port), where Port is an integer, is added.\n\ninput(Stream): The Stream is passed along, allowing more data or requests to be read from the same stream. This field is always present.\n\nmethod(Method): Method is the HTTP method represented as a lower-case atom, e.g., get, put, post. This field is present if the header has been parsed successfully.\n\npath(Path): Path associated with the request. This field is always present.\n\npeer(Peer): Peer is a term ip(A,B,C,D) containing the IP address of the contacting host.\n\nport(Port): Port requested. See host for details.\n\nrequest_uri(RequestURI): This is the untranslated string that follows the method in the request header. It is used to construct the path and search fields of the Request. It is provided because reconstructing this string from the path and search fields may yield a different value due to different usage of percent encoding.\n\nsearch(ListOfNameValue): Search-specification of the URI. This is the part after the ?, normally used to transfer data from HTML forms that use the `GET' protocol. In the URL it consists of a www-form-encoded list of Name=Value pairs. This is mapped to a list of Prolog Name=Value terms with decoded names and values. This field is only present if the location contains a search-specification. The URL specification does not demand the query part to be of the form name=value. If the field is syntactically incorrect, ListOfNameValue is bound to the empty list ([]).\n\nhttp_version(Major-Minor): If the first line contains the HTTP/Major.Minor version indicator, this element indicates the HTTP version of the peer. Otherwise this field is not present.\n\ncookie(ListOfNameValue): If the header contains a Cookie line, the value of the cookie is broken down in Name=Value pairs, where the Name is the lowercase version of the cookie name as used for the HTTP fields.\n\nset_cookie(set_cookie(Name, Value, Options)): If the header contains a Set-Cookie line, the cookie field is broken down into the Name of the cookie, the Value and a list of Name=Value pairs for additional options such as expire, path, domain or secure.\n\n If the first line of the request is tagged with HTTP/Major.Minor, http_read_request/2 reads all input up to the first blank line. This header consists of Name:Value fields. Each such field appears as a term Name(Value) in the Request, where Name is canonicalised for use with Prolog. Canonicalisation implies that the Name is converted to lower case and all occurrences of the - are replaced by _. 
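For example (an illustrative request term, not actual server output), a header line User-Agent: curl would appear in Request as user_agent(curl). 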
The value for the Content-length field is translated into an integer.\n\n", "prefix":"http_read_request" }, "http/http_openid:http_relative_path/2": { "body":"http_relative_path(${1:AbsPath}, ${2:RelPath})$3\n$0", "description":"http_relative_path(+AbsPath, -RelPath).\nConvert an absolute path (without host, fragment or search) into a path relative to the current page, defined as the path component from the current request (see http_current_request/1). This call is intended to create reusable components returning relative paths for easier support of reverse proxies. If ---for whatever reason--- the conversion is not possible, it simply unifies RelPath to AbsPath.\n\n", "prefix":"http_relative_path" }, "http/http_openid:http_server/2": { "body":"http_server(${1:Goal}, ${2:Options})$3\n$0", "description":"http_server(:Goal, +Options).\nInitialises and runs http_wrapper/5 in a loop until failure or end-of-file. This server does not support the Port option as the port is specified with the inetd configuration. The only supported option is After.", "prefix":"http_server" }, "http/http_openid:http_server_hook/1": { "body":"http_server_hook(${1:Options})$2\n$0", "description":"[semidet,multifile]http_server_hook(+Options).\nHook that is called to start the HTTP server. This hook must be compatible with http_server(Handler, Options). The default is provided by start_server/1.", "prefix":"http_server_hook" }, "http/http_openid:http_server_property/2": { "body":"http_server_property(${1:Port}, ${2:Property})$3\n$0", "description":"http_server_property(?Port, ?Property).\nTrue if Property is a property of the HTTP server running at Port. Defined properties are: goal(:Goal): Goal used to start the server. This is often http_dispatch/1.\n\nscheme(-Scheme): Scheme is one of http or https.\n\nstart_time(-Time): Time-stamp when the server was created. See format_time/3 for creating a human-readable representation.\n\n ", "prefix":"http_server_property" }, "http/http_openid:http_spawn/2": { "body":"http_spawn(${1:Goal}, ${2:Spec})$3\n$0", "description":"http_spawn(:Goal, +Spec).\nContinue handling this request in a new thread running Goal. After http_spawn/2, the worker returns to the pool to process new requests. In its simplest form, Spec is the name of a thread pool as defined by thread_pool_create/3. Alternatively it is an option list, whose options are passed to thread_create_in_pool/4 if Spec contains pool(Pool) or to thread_create/3 if the pool option is not present. If the dispatch module is used (see section 3.2), spawning is normally specified as an option to the http_handler/3 registration. We recommend the use of thread pools. They allow registration of a set of threads using common characteristics, specify how many can be active and what to do if all threads are active. A typical application may define a small pool of threads with large stacks for computation-intensive tasks, and a large pool of threads with small stacks to serve media. The declaration could be the one below, allowing for at most 3 concurrent solvers with a backlog of 5 tasks, and 30 concurrent threads creating image thumbnails with a backlog of 100. 
\n\n\n\n:- use_module(library(thread_pool)).\n\n:- thread_pool_create(compute, 3,\n [ local(20000), global(100000), trail(50000),\n backlog(5)\n ]).\n:- thread_pool_create(media, 30,\n [ local(100), global(100), trail(100),\n backlog(100)\n ]).\n\n:- http_handler('/solve', solve, [spawn(compute)]).\n:- http_handler('/thumbnail', thumbnail, [spawn(media)]).\n\n \n\n", "prefix":"http_spawn" }, "http/http_openid:http_stop_server/2": { "body":"http_stop_server(${1:Port}, ${2:Options})$3\n$0", "description":"http_stop_server(+Port, +Options).\nStop the HTTP server at Port. Halting a server is done gracefully, which means that requests being processed are not abandoned. The Options list is for future refinements of this predicate such as a forced immediate abort of the server, but is currently ignored.", "prefix":"http_stop_server" }, "http/http_openid:http_workers/2": { "body":"http_workers(${1:Port}, ${2:Workers})$3\n$0", "description":"http_workers(+Port, ?Workers).\nQuery or manipulate the number of workers of the server identified by Port. If Workers is unbound it is unified with the number of running servers. If it is an integer greater than the current size of the worker pool new workers are created with the same specification as the running workers. If the number is less than the current size of the worker pool, this predicate inserts a number of `quit' requests in the queue, discarding the excess workers as they finish their jobs (i.e. no worker is abandoned while serving a client). This can be used to tune the number of workers for performance. Another possible application is to reduce the pool to one worker to facilitate easier debugging.\n\n", "prefix":"http_workers" }, "http/http_openid:http_wrapper/5": { "body":"http_wrapper(${1:Goal}, ${2:In}, ${3:Out}, ${4:Connection}, ${5:Options})$6\n$0", "description":"http_wrapper(:Goal, +In, +Out, -Connection, +Options).\nHandle an HTTP request where In is an input stream from the client, Out is an output stream to the client and Goal defines the goal realising the body. Connection is unified to 'Keep-alive' if both ends of the connection want to continue the connection or close if either side wishes to close the connection. This predicate reads an HTTP request-header from In, redirects current output to a memory file and then runs call(Goal, Request), watching for exceptions and failure. If Goal executes successfully it generates a complete reply from the created output. Otherwise it generates an HTTP server error with additional context information derived from the exception. \n\nhttp_wrapper/5 supports the following options: \n\nrequest(-Request): Return the executed request to the caller.\n\npeer(+Peer): Add peer(Peer) to the request header handed to Goal. The format of Peer is defined by tcp_accept/3 from the clib package.\n\n ", "prefix":"http_wrapper" }, "http/http_openid:openid_associate/3": { "body": ["openid_associate(${1:URL}, ${2:Handle}, ${3:Assoc})$4\n$0" ], "description":" openid_associate(?URL, ?Handle, ?Assoc) is det.\n\n Calls openid_associate/4 as\n\n ==\n openid_associate(URL, Handle, Assoc, []).\n ==", "prefix":"openid_associate" }, "http/http_openid:openid_associate/4": { "body": [ "openid_associate(${1:URL}, ${2:Handle}, ${3:Assoc}, ${4:Options})$5\n$0" ], "description":" openid_associate(+URL, -Handle, -Assoc, +Options) is det.\n openid_associate(?URL, +Handle, -Assoc, +Options) is semidet.\n\n Associate with an open-id server. We first check for a still\n valid old association. 
If there is none or it is expired, we\n establish one and remember it. Options:\n\n * ns(URL)\n One of =http://specs.openid.net/auth/2.0= (default) or\n =http://openid.net/signon/1.1=.\n\n @tbd Should we store known associations permanently? Where?", "prefix":"openid_associate" }, "http/http_openid:openid_authenticate/4": { "body": [ "openid_authenticate(${1:Request}, ${2:Server}, ${3:OpenID}, ${4:ReturnTo})$5\n$0" ], "description":" openid_authenticate(+Request, -Server:url, -OpenID:url,\n -ReturnTo:url) is semidet.\n\n Succeeds if Request comes from the OpenID server and confirms\n that User is a verified OpenID user. ReturnTo provides the URL\n to return to.\n\n After openid_verify/2 has redirected the browser to the OpenID\n server, and the OpenID server did its magic, it redirects the\n browser back to this address. The work is fairly trivial. If\n =mode= is =cancel=, the OpenId server denied. If =id_res=, the\n OpenId server replied positively, but we must verify what the\n server told us by checking the HMAC-SHA signature.\n\n This call fails silently if there is no =|openid.mode|= field in\n the request.\n\n @throws openid(cancel)\n if the request was cancelled by the OpenId server\n @throws openid(signature_mismatch)\n if the HMAC signature check failed", "prefix":"openid_authenticate" }, "http/http_openid:openid_current_host/3": { "body": ["openid_current_host(${1:Request}, ${2:Host}, ${3:Port})$4\n$0" ], "description":" openid_current_host(Request, Host, Port)\n\n Find current location of the server.\n\n @deprecated New code should use http_current_host/4 with the\n option global(true).", "prefix":"openid_current_host" }, "http/http_openid:openid_current_url/2": { "body": ["openid_current_url(${1:Request}, ${2:URL})$3\n$0" ], "description":" openid_current_url(+Request, -URL) is det.\n\n @deprecated New code should use http_public_url/2 with the\n same semantics.", "prefix":"openid_current_url" }, "http/http_openid:openid_grant/1": { "body": ["openid_grant(${1:Request})$2\n$0" ], "description":" openid_grant(+Request)\n\n Handle the reply from checkid_setup_server/3. If the reply is\n =yes=, check the authority (typically the password) and if all\n looks good redirect the browser to ReturnTo, adding the OpenID\n properties needed by the Relying Party to verify the login.", "prefix":"openid_grant" }, "http/http_openid:openid_logged_in/1": { "body": ["openid_logged_in(${1:OpenID})$2\n$0" ], "description":" openid_logged_in(-OpenID) is semidet.\n\n True if the session is associated with OpenID.", "prefix":"openid_logged_in" }, "http/http_openid:openid_login/1": { "body": ["openid_login(${1:OpenID})$2\n$0" ], "description":" openid_login(+OpenID) is det.\n\n Associate the current HTTP session with OpenID. If another\n OpenID is already associated, this association is first removed.", "prefix":"openid_login" }, "http/http_openid:openid_logout/1": { "body": ["openid_logout(${1:OpenID})$2\n$0" ], "description":" openid_logout(+OpenID) is det.\n\n Remove the association of the current session with any OpenID.", "prefix":"openid_logout" }, "http/http_openid:openid_server/2": { "body": ["openid_server(${1:Options}, ${2:Request})$3\n$0" ], "description":" openid_server(+Options, +Request)\n\n Realise the OpenID server. 
The protocol demands a POST request\n here.", "prefix":"openid_server" }, "http/http_openid:openid_server/3": { "body": ["openid_server(${1:OpenIDLogin}, ${2:OpenID}, ${3:Server})$4\n$0" ], "description":" openid_server(?OpenIDLogin, ?OpenID, ?Server) is nondet.\n\n True if OpenIDLogin is the typed id for OpenID verified by\n Server.\n\n @param OpenIDLogin ID as typed by user (canonized)\n @param OpenID ID as verified by server\n @param Server URL of the OpenID server", "prefix":"openid_server" }, "http/http_openid:openid_user/3": { "body": ["openid_user(${1:Request}, ${2:OpenID}, ${3:Options})$4\n$0" ], "description":" openid_user(+Request:http_request, -OpenID:url, +Options) is det.\n\n True if OpenID is a validated OpenID associated with the current\n session. The scenario for which this predicate is designed is to\n allow an HTTP handler that requires a valid login to\n use the transparent code below.\n\n ==\n handler(Request) :-\n openid_user(Request, OpenID, []),\n ...\n ==\n\n If the user is not yet logged on a sequence of redirects will\n follow:\n\n 1. Show a page for login (default: page /openid/login),\n predicate reply_openid_login/1)\n 2. By default, the OpenID login page is a form that is\n submitted to the =verify=, which calls openid_verify/2.\n 3. openid_verify/2 does the following:\n - Find the OpenID claimed identity and server\n - Associate to the OpenID server\n - redirects to the OpenID server for validation\n 4. The OpenID server will redirect here with the authetication\n information. This is handled by openid_authenticate/4.\n\n Options:\n\n * login_url(Login)\n (Local) URL of page to enter OpenID information. Default\n is the handler for openid_login_page/1\n\n @see openid_authenticate/4 produces errors if login is invalid\n or cancelled.", "prefix":"openid_user" }, "http/http_openid:openid_verify/2": { "body": ["openid_verify(${1:Options}, ${2:Request})$3\n$0" ], "description":" openid_verify(+Options, +Request)\n\n Handle the initial login form presented to the user by the\n relying party (consumer). This predicate discovers the OpenID\n server, associates itself with this server and redirects the\n user's browser to the OpenID server, providing the extra\n openid.X name-value pairs. Options is, against the conventions,\n placed in front of the Request to allow for smooth cooperation\n with http_dispatch.pl. Options processes:\n\n * return_to(+URL)\n Specifies where the OpenID provider should return to.\n Normally, that is the current location.\n * trust_root(+URL)\n Specifies the =openid.trust_root= attribute. Defaults to\n the root of the current server (i.e., =|http://host[.port]/|=).\n * realm(+URL)\n Specifies the =openid.realm= attribute. Default is the\n =trust_root=.\n * ax(+Spec)\n Request the exchange of additional attributes from the\n identity provider. See http_ax_attributes/2 for details.\n\n The OpenId server will redirect to the =openid.return_to= URL.\n\n @throws http_reply(moved_temporary(Redirect))", "prefix":"openid_verify" }, "http/http_parameters:http_convert_parameter/4": { "body": [ "http_convert_parameter(${1:Options}, ${2:FieldName}, ${3:ValueIn}, ${4:ValueOut})$5\n$0" ], "description":" http_convert_parameter(+Options, +FieldName, +ValueIn, -ValueOut) is det.\n\n Conversion of an HTTP form value. 
First tries the multifile hook\n http:convert_parameter/3 and next the built-in checks.\n\n @param Option List as provided with the parameter\n @param FieldName Name of the HTTP field (for better message)\n @param ValueIn Atom value as received from HTTP layer\n @param ValueOut Possibly converted final value\n @error type_error(Type, Value)", "prefix":"http_convert_parameter" }, "http/http_parameters:http_convert_parameters/2": { "body": ["http_convert_parameters(${1:Data}, ${2:Params})$3\n$0" ], "description":" http_convert_parameters(+Data, ?Params) is det.\n http_convert_parameters(+Data, ?Params, :AttrDecl) is det.\n\n Implements the parameter translation of http_parameters/2 or\n http_parameters/3. I.e., http_parameters/2 for a POST request\n can be implemented as:\n\n ==\n http_parameters(Request, Params) :-\n http_read_data(Request, Data, []),\n http_convert_parameters(Data, Params).\n ==", "prefix":"http_convert_parameters" }, "http/http_parameters:http_convert_parameters/3": { "body": [ "http_convert_parameters(${1:Data}, ${2:Params}, ${3:AttrDecl})$4\n$0" ], "description":" http_convert_parameters(+Data, ?Params) is det.\n http_convert_parameters(+Data, ?Params, :AttrDecl) is det.\n\n Implements the parameter translation of http_parameters/2 or\n http_parameters/3. I.e., http_parameters/2 for a POST request\n can be implemented as:\n\n ==\n http_parameters(Request, Params) :-\n http_read_data(Request, Data, []),\n http_convert_parameters(Data, Params).\n ==", "prefix":"http_convert_parameters" }, "http/http_parameters:http_parameters/2": { "body": ["http_parameters(${1:Request}, ${2:Parms})$3\n$0" ], "description":" http_parameters(+Request, ?Parms) is det.\n http_parameters(+Request, ?Parms, :Options) is det.\n\n Get HTTP GET or POST form-data, applying type validation,\n default values, etc. Provided options are:\n\n * attribute_declarations(:Goal)\n Causes the declarations for an attributed named A to be\n fetched using call(Goal, A, Declarations).\n\n * form_data(-Data)\n Return the data read from the GET por POST request as a\n list Name = Value. All data, including name/value pairs\n used for Parms, is unified with Data.\n\n The attribute_declarations hook allows sharing the declaration\n of attribute-properties between many http_parameters/3 calls. In\n this form, the requested attribute takes only one argument and\n the options are acquired by calling the hook. For example:\n\n ==\n ...,\n http_parameters(Request,\n [ sex(Sex)\n ],\n [ attribute_declarations(http_param)\n ]),\n ...\n\n http_param(sex, [ oneof(male, female),\n description('Sex of the person')\n ]).\n ==", "prefix":"http_parameters" }, "http/http_parameters:http_parameters/3": { "body": ["http_parameters(${1:Request}, ${2:Parms}, ${3:Options})$4\n$0" ], "description":" http_parameters(+Request, ?Parms) is det.\n http_parameters(+Request, ?Parms, :Options) is det.\n\n Get HTTP GET or POST form-data, applying type validation,\n default values, etc. Provided options are:\n\n * attribute_declarations(:Goal)\n Causes the declarations for an attributed named A to be\n fetched using call(Goal, A, Declarations).\n\n * form_data(-Data)\n Return the data read from the GET por POST request as a\n list Name = Value. All data, including name/value pairs\n used for Parms, is unified with Data.\n\n The attribute_declarations hook allows sharing the declaration\n of attribute-properties between many http_parameters/3 calls. 
In\n this form, the requested attribute takes only one argument and\n the options are acquired by calling the hook. For example:\n\n ==\n ...,\n http_parameters(Request,\n [ sex(Sex)\n ],\n [ attribute_declarations(http_param)\n ]),\n ...\n\n http_param(sex, [ oneof(male, female),\n description('Sex of the person')\n ]).\n ==", "prefix":"http_parameters" }, "http/http_path:http_absolute_location/3": { "body": ["http_absolute_location(${1:Spec}, ${2:Path}, ${3:Options})$4\n$0" ], "description":" http_absolute_location(+Spec, -Path, +Options) is det.\n\n Path is the HTTP location for the abstract specification Spec.\n Options:\n\n * relative_to(Base)\n Path is made relative to Base. Default is to generate\n absolute URLs.\n\n @see http_absolute_uri/2 to create a reference that can be\n used on another server.", "prefix":"http_absolute_location" }, "http/http_path:http_absolute_uri/2": { "body": ["http_absolute_uri(${1:Spec}, ${2:URI})$3\n$0" ], "description":" http_absolute_uri(+Spec, -URI) is det.\n\n URI is the absolute (i.e., starting with =|http://|=) URI for\n the abstract specification Spec. Use http_absolute_location/3 to\n create references to locations on the same server.\n\n @tbd Distinguish =http= from =https=", "prefix":"http_absolute_uri" }, "http/http_path:http_clean_location_cache/0": { "body": ["http_clean_location_cache$1\n$0" ], "description":" http_clean_location_cache\n\n HTTP locations resolved through http_absolute_location/3 are\n cached. This predicate wipes the cache. The cache is\n automatically wiped by make/0 and if the setting http:prefix is\n changed.", "prefix":"http_clean_location_cache" }, "http/http_pwp:mime_include/2": { "body":"mime_include(${1:Mime}, ${2:Path})$3\n$0", "description":"[semidet,multifile]mime_include(+Mime, +Path)//.\nHook called to include a link to an HTML resource of type Mime into the HTML head. The Mime type is computed from Path using file_mime_type/2. If the hook fails, two built-in rules for text/css and text/javascript are tried. For example, to include a =.pl= files as a Prolog script, use: \n\n:- multifile\n html_head:mime_include//2.\n\nhtml_head:mime_include(text/'x-prolog', Path) --> !,\n html(script([ type('text/x-prolog'),\n src(Path)\n ], [])).\n\n \n\n", "prefix":"mime_include" }, "http/http_pwp:pwp_handler/2": { "body":"pwp_handler(${1:Options}, ${2:Request})$3\n$0", "description":"pwp_handler(+Options, +Request).\nHandle PWP files. This predicate is defined to create a simple HTTP server from a hierarchy of PWP, HTML and other files. The interface is kept compatible with the library(http/http_dispatch). In the typical usage scenario, one needs to define an http location and a file-search path that is used as the root of the server. E.g., the following declarations create a self-contained web-server for files in /web/pwp/. \n\nuser:file_search_path(pwp, '/web/pwp').\n\n:- http_handler(root(.), pwp_handler([path_alias(pwp)]), [prefix]).\n\n Options include: \n\npath_alias(+Alias): Search for PWP files as Alias(Path). See absolute_file_name/3.\n\nindex(+Index): Name of the directory index (pwp) file. This option may appear multiple times. If no such option is provided, pwp_handler/2 looks for index.pwp.\n\nview(+Boolean): If true (default is false), allow for ?view=source to serve PWP file as source.\n\nindex_hook(:Hook): If a directory has no index-file, pwp_handler/2 calls Hook(PhysicalDir, Options, Request). 
If this semidet predicate succeeds, the request is considered handled.\n\nhide_extensions(+List): Hide files of the given extensions. The default is to hide .pl files.\n\ndtd(?DTD): DTD to parse the input file with. If unbound, the generated DTD is returned\n\n Errors: permission_error(index, http_location, Location) is raised if the handler resolves to a directory that has no index.\n\nSee also: reply_pwp_page/3\n\n ", "prefix":"pwp_handler" }, "http/http_pwp:reply_pwp_page/3": { "body":"reply_pwp_page(${1:File}, ${2:Options}, ${3:Request})$4\n$0", "description":"reply_pwp_page(:File, +Options, +Request).\nReply a PWP file. This interface is provided to server individual locations from PWP files. Using a PWP file rather than generating the page from Prolog may be desirable because the page contains a lot of text (which is cumbersome to generate from Prolog) or because the maintainer is not familiar with Prolog. Options supported are: \n\nmime_type(+Type): Serve the file using the given mime-type. Default is text/html.\n\nunsafe(+Boolean): Passed to http_safe_file/2 to check for unsafe paths.\n\npwp_module(+Boolean): If true, (default false), process the PWP file in a module constructed from its canonical absolute path. Otherwise, the PWP file is processed in the calling module.\n\n Initial context: \n\nSCRIPT_NAME: Virtual path of the script.\n\nSCRIPT_DIRECTORY: Physical directory where the script lives\n\nQUERY: Var=Value list representing the query-parameters\n\nREMOTE_USER: If access has been authenticated, this is the authenticated user.\n\nREQUEST_METHOD: One of get, post, put or head\n\nCONTENT_TYPE: Content-type provided with HTTP POST and PUT requests\n\nCONTENT_LENGTH: Content-length provided with HTTP POST and PUT requests\n\n While processing the script, the file-search-path pwp includes the current location of the script. I.e., the following will find myprolog in the same directory as where the PWP file resides. \n\n\n\npwp:ask=\"ensure_loaded(pwp(myprolog))\"\n\n See also: pwp_handler/2.\n\nTo be done: complete the initial context, as far as possible from CGI variables. See http://hoohoo.ncsa.illinois.edu/docs/cgi/env.html\n\n ", "prefix":"reply_pwp_page" }, "http/http_server_files:serve_files_in_directory/2": { "body": ["serve_files_in_directory(${1:Alias}, ${2:Request})$3\n$0" ], "description":" serve_files_in_directory(+Alias, +Request)\n\n Serve files from the directory Alias from the path-info from\n Request. This predicate is used together with\n file_search_path/2. Note that multiple clauses for the same\n file_search_path alias can be used to merge files from different\n physical locations onto the same HTTP directory. Note that the\n handler must be declared as =prefix=. Below is an example\n serving images from http:///img/... from the directory\n =http/web/icons=.\n\n ==\n http:location(img, root(img), []).\n user:file_search_path(icons, library('http/web/icons')).\n\n :- http_handler(img(.), serve_files_in_directory(icons), [prefix]).\n ==\n\n This predicate calls http_404/2 if the physical file cannot be\n located. If the requested path-name is unsafe (i.e., points\n outside the hierarchy defines by the file_search_path/2\n declaration), this handlers returns a _403 Forbidden_ page.\n\n @see http_reply_file/3", "prefix":"serve_files_in_directory" }, "http/http_session:http_close_session/1": { "body":"http_close_session(${1:SessionID})$2\n$0", "description":"[det]http_close_session(+SessionID).\nCloses an HTTP session. 
This predicate can be called from any thread to terminate a session. It uses the broadcast/1 service with the message below. \n\nhttp_session(end(SessionId, Peer))\n\n The broadcast is done before the session data is destroyed and the listen-handlers are executed in the context of the session that is being closed. Here is an example that destroys a Prolog thread that is associated with the session: \n\n\n\n:- listen(http_session(end(SessionID, _Peer)),\n kill_session_thread(SessionID)).\n\nkill_session_thread(SessionID) :-\n http_session_data(thread(ThreadID)),\n thread_signal(ThreadID, throw(session_closed)).\n\n Succeeds without any effect if SessionID does not refer to an active session. \n\nIf http_close_session/1 is called from a handler operating in the current session and the CGI stream is still in state header, this predicate emits a Set-Cookie to expire the cookie. \n\nErrors: type_error(atom, SessionID)\n\nSee also: listen/2 for acting upon closed sessions\n\n ", "prefix":"http_close_session" }, "http/http_session:http_current_session/2": { "body":"http_current_session(${1:SessionID}, ${2:Data})$3\n$0", "description":"[nondet]http_current_session(?SessionID, ?Data).\nEnumerate the current sessions and associated data. There are two pseudo data elements: idle(Seconds): Session has been idle for Seconds.\n\npeer(Peer): Peer of the connection.\n\n ", "prefix":"http_current_session" }, "http/http_session:http_in_session/1": { "body":"http_in_session(${1:SessionId})$2\n$0", "description":"[semidet]http_in_session(-SessionId).\nTrue if SessionId is an identifier for the current session. The current session is extracted from session(ID) from the current HTTP request (see http_current_request/1). The value is cached in a backtrackable global variable http_session_id. Using a backtrackable global variable is safe because continuous worker threads use a failure-driven loop and spawned threads start without any global variables. This variable can be set from the commandline to fake running a goal from the commandline in the context of a session. See also: http_session_id/1\n\n ", "prefix":"http_in_session" }, "http/http_session:http_open_session/2": { "body":"http_open_session(${1:SessionID}, ${2:Options})$3\n$0", "description":"[det]http_open_session(-SessionID, +Options).\nEstablish a new session. This is normally used if the create option is set to noauto. Options: renew(+Boolean): If true (default false) and the current request is part of a session, generate a new session-id. By default, this predicate returns the current session as obtained with http_in_session/1.\n\n Errors: permission_error(open, http_session, CGI) if this call is used after closing the CGI header.\n\nSee also: - http_set_session_options/1 to control the create option. - http_close_session/1 for closing the session.\n\n ", "prefix":"http_open_session" }, "http/http_session:http_reply_from_files/3": { "body":"http_reply_from_files(${1:Dir}, ${2:Options}, ${3:Request})$4\n$0", "description":"http_reply_from_files(+Dir, +Options, +Request).\nHTTP handler that serves files from the directory Dir. This handler uses http_reply_file/3 to reply plain files. If the request resolves to a directory, it uses the option indexes to locate an index file (see below) or uses http_reply_dirindex/3 to create a listing of the directory. Options: \n\nindexes(+List): List of files tried to find an index for a directory. 
The default is ['index.html'].\n\n Note that this handler must be tagged as a prefix handler (see http_handler/3 and module introduction). This also implies that it is possible to override more specific locations in the hierarchy using http_handler/3 with a longer path-specifier.\n\nDir is either a directory or an path-specification as used by absolute_file_name/3. This option provides great flexibility in (re-)locating the physical files and allows merging the files of multiple physical locations into one web-hierarchy by using multiple user:file_search_path/2 clauses that define the same alias. See also: The hookable predicate file_mime_type/2 is used to determine the Content-type from the file name.\n\n ", "prefix":"http_reply_from_files" }, "http/http_session:http_session_assert/1": { "body":"http_session_assert(${1:Data})$2\n$0", "description":"[det]http_session_assert(+Data).\n", "prefix":"http_session_assert" }, "http/http_session:http_session_asserta/1": { "body":"http_session_asserta(${1:Data})$2\n$0", "description":"[det]http_session_asserta(+Data).\n", "prefix":"http_session_asserta" }, "http/http_session:http_session_cookie/1": { "body":"http_session_cookie(${1:Cookie})$2\n$0", "description":"[det]http_session_cookie(-Cookie).\nGenerate a random cookie that can be used by a browser to identify the current session. The cookie has the format XXXX-XXXX-XXXX-XXXX[.], where XXXX are random hexadecimal numbers and [.] is the optionally added routing information.", "prefix":"http_session_cookie" }, "http/http_session:http_session_data/1": { "body":"http_session_data(${1:Data})$2\n$0", "description":"[nondet]http_session_data(?Data).\nTrue if Data is associated using http_session_assert/1 to the current HTTP session. Errors: existence_error(http_session,_)\n\n ", "prefix":"http_session_data" }, "http/http_session:http_session_id/1": { "body":"http_session_id(${1:SessionId})$2\n$0", "description":"[det]http_session_id(-SessionId).\nTrue if SessionId is an identifier for the current session. SessionId is an atom. Errors: existence_error(http_session, _)\n\nSee also: http_in_session/1 for a version that fails if there is no session.\n\n ", "prefix":"http_session_id" }, "http/http_session:http_session_option/1": { "body":"http_session_option(${1:Option})$2\n$0", "description":"[nondet]http_session_option(?Option).\nTrue if Option is a current option of the session system.", "prefix":"http_session_option" }, "http/http_session:http_session_retract/1": { "body":"http_session_retract(${1:Data})$2\n$0", "description":"[nondet]http_session_retract(?Data).\n", "prefix":"http_session_retract" }, "http/http_session:http_session_retractall/1": { "body":"http_session_retractall(${1:Data})$2\n$0", "description":"[det]http_session_retractall(?Data).\nVersions of assert/1, retract/1 and retractall/1 that associate data with the current HTTP session.", "prefix":"http_session_retractall" }, "http/http_session:http_set_session/1": { "body":"http_set_session(${1:Setting})$2\n$0", "description":"[det]http_set_session(Setting).\n", "prefix":"http_set_session" }, "http/http_session:http_set_session/2": { "body":"http_set_session(${1:SessionId}, ${2:Setting})$3\n$0", "description":"[det]http_set_session(SessionId, Setting).\nOverrule a setting for the current or specified session. Currently, the only setting that can be overruled is timeout. 
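As a minimal sketch (the session identifier and value are illustrative), http_set_session(timeout(3600)) gives the current session a one-hour timeout, and http_set_session(SessionId, timeout(3600)) does the same for the session identified by SessionId. 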
Errors: permission_error(set, http_session, Setting) if setting a setting that is not supported on per-session basis.\n\n ", "prefix":"http_set_session" }, "http/http_session:http_set_session_options/1": { "body":"http_set_session_options(${1:Options})$2\n$0", "description":"[det]http_set_session_options(+Options).\nSet options for the session library. Provided options are: timeout(+Seconds): Session timeout in seconds. Default is 600 (10 min). A timeout of 0 (zero) disables timeout.\n\ncookie(+Cookiekname): Name to use for the cookie to identify the session. Default swipl_session.\n\npath(+Path): Path to which the cookie is associated. Default is /. Cookies are only sent if the HTTP request path is a refinement of Path.\n\nroute(+Route): Set the route name. Default is the unqualified hostname. To cancel adding a route, use the empty atom. See route/1.\n\nenabled(+Boolean): Enable/disable session management. Sesion management is enabled by default after loading this file.\n\ncreate(+Atom): Defines when a session is created. This is one of auto (default), which creates a session if there is a request whose path matches the defined session path or noauto, in which cases sessions are only created by calling http_open_session/2 explicitely.\n\nproxy_enabled(+Boolean): Enable/disable proxy session management. Proxy session management associates the originating IP address of the client to the session rather than the proxy IP address. Default is false.\n\n ", "prefix":"http_set_session_options" }, "http/http_stream:cgi_discard/1": { "body": ["cgi_discard(${1:CGIStream})$2\n$0" ], "description":" cgi_discard(+CGIStream) is det.\n\n Discard content produced sofar. It sets the state property to\n =discarded=, causing close to omit the writing the data. This\n must be used for an alternate output (e.g. an error page) if the\n page generator fails.", "prefix":"cgi_discard" }, "http/http_stream:cgi_open/4": { "body": [ "cgi_open(${1:OutStream}, ${2:CGIStream}, ${3:Hook}, ${4:Options})$5\n$0" ], "description":" cgi_open(+OutStream, -CGIStream, :Hook, +Options) is det.\n\n Process CGI output. OutStream is normally the socket returning\n data to the HTTP client. CGIStream is the stream the (Prolog)\n code writes to. The CGIStream provides the following functions:\n\n * At the end of the header, it calls Hook using\n call(Hook, header, Stream), where Stream is a stream holding\n the buffered header.\n\n * If the stream is closed, it calls Hook using\n call(Hook, data, Stream), where Stream holds the buffered\n data.\n\n The stream calls Hook, adding the event and CGIStream to the\n closure. Defined events are:\n\n * header\n Called if the header is complete. Typically it uses\n cgi_property/2 to extract the collected header and combines\n these with the request and policies to decide on encoding,\n transfer-encoding, connection parameters and the complete\n header (as a Prolog term). Typically it uses cgi_set/2 to\n associate these with the stream.\n\n * send_header\n Called if the HTTP header must be sent. This is immediately\n after setting the transfer encoding to =chunked= or when the\n CGI stream is closed. Typically it requests the current\n header, optionally the content-length and sends the header\n to the original (client) stream.\n\n * close\n Called from close/1 on the CGI stream after everything is\n complete.\n\n The predicates cgi_property/2 and cgi_set/2 can be used to\n control the stream and store status info. 
Terms are stored as\n Prolog records and can thus be transferred between threads.", "prefix":"cgi_open" }, "http/http_stream:cgi_property/2": { "body": ["cgi_property(${1:CGIStream}, ${2:Property})$3\n$0" ], "description":" cgi_property(+CGIStream, ?Property) is det.\n\n Inquire the status of the CGI stream. Defined properties are:\n\n * request(-Term)\n The original request\n * header(-Term)\n Term is the header term as registered using cgi_set/2\n * client(-Stream)\n Stream is the original output stream used to create\n this stream.\n * thread(-ThreadID)\n ThreadID is the identifier of the `owning thread'\n * transfer_encoding(-Tranfer)\n One of =chunked= or =none=.\n * connection(-Connection)\n One of =Keep-Alive= or =close=\n * content_length(-ContentLength)\n Total byte-size of the content. Available in the close\n handler if the transfer_encoding is =none=.\n * header_codes(-Codes)\n Codes represents the header collected. Available in the\n header handler.\n * state(-State)\n One of =header=, =data= or =discarded=\n * id(-ID)\n Request sequence number. This number is guaranteed to be\n unique.", "prefix":"cgi_property" }, "http/http_stream:cgi_set/2": { "body": ["cgi_set(${1:CGIStream}, ${2:Property})$3\n$0" ], "description":" cgi_set(+CGIStream, ?Property) is det.\n\n Change one of the properies. Supported properties are:\n\n * request(+Term)\n Associate a request to the stream.\n * header(+Term)\n Register a reply header. This header is normally retrieved\n from the =send_header= hook to send the reply header to the\n client.\n * connection(-Connection)\n One of =Keep-Alive= or =close=.\n * transfer_encoding(-Tranfer)\n One of =chunked= or =none=. Initially set to =none=. When\n switching to =chunked= from the =header= hook, it calls the\n =send_header= hook and if there is data queed this is send\n as first chunk. Each subsequent write to the CGI stream\n emits a chunk.", "prefix":"cgi_set" }, "http/http_stream:cgi_statistics/1": { "body": ["cgi_statistics(${1:Term})$2\n$0" ], "description":" cgi_statistics(?Term)\n\n Return statistics on the CGI stream subsystem. Currently defined\n statistics are:\n\n * requests(-Integer)\n Total number of requests processed\n * bytes_sent(-Integer)\n Total number of bytes sent.", "prefix":"cgi_statistics" }, "http/http_stream:http_chunked_open/3": { "body": [ "http_chunked_open(${1:RawStream}, ${2:DataStream}, ${3:Options})$4\n$0" ], "description":" http_chunked_open(+RawStream, -DataStream, +Options) is det.\n\n Create a stream to realise HTTP chunked encoding or decoding.\n The technique is similar to library(zlib), using a Prolog stream\n as a filter on another stream. Options:\n\n * close_parent(+Bool)\n If =true= (default =false=), the parent stream is closed\n if DataStream is closed.\n\n * max_chunk_size(+PosInt)\n Define the maximum size of a chunk. Default is the\n default buffer size of fully buffered streams (4096).\n Larger values may improve throughput. It is also\n allowed to use =|set_stream(DataStream, buffer(line))|=\n on the data stream to get line-buffered output. See\n set_stream/2 for details. 
Switching buffering to =false=\n is supported.\n\n Here is example code to write a chunked data to a stream\n\n ==\n http_chunked_open(Out, S, []),\n format(S, 'Hello world~n', []),\n close(S).\n ==\n\n If a stream is known to contain chunked data, we can extract\n this data using\n\n ==\n http_chunked_open(In, S, []),\n read_stream_to_codes(S, Codes),\n close(S).\n ==\n\n The current implementation does not generate chunked extensions\n or an HTTP trailer. If such extensions appear on the input they\n are silently ignored. This is compatible with the HTTP 1.1\n specifications. Although a filtering stream is an excellent\n mechanism for encoding and decoding the core chunked protocol,\n it does not well support out-of-band data.\n\n After http_chunked_open/3, the encoding of DataStream is the\n same as the encoding of RawStream, while the encoding of\n RawStream is =octet=, the only value allowed for HTTP chunked\n streams. Closing the DataStream restores the old encoding on\n RawStream.\n\n @error io_error(read, Stream) where the message context provides\n an indication of the problem. This error is raised if\n the input is not valid HTTP chunked data.", "prefix":"http_chunked_open" }, "http/http_stream:is_cgi_stream/1": { "body": ["is_cgi_stream(${1:Stream})$2\n$0" ], "description":" is_cgi_stream(+Stream) is semidet.\n\n True if Stream is a CGI stream created using cgi_open/4.", "prefix":"is_cgi_stream" }, "http/http_stream:multipart_open/3": { "body": ["multipart_open(${1:Stream}, ${2:DataSttream}, ${3:Options})$4\n$0" ], "description":" multipart_open(+Stream, -DataSttream, +Options) is det.\n\n DataStream is a stream that signals `end_of_file` if the\n multipart _boundary_ is encountered. The stream can be reset to\n read the next part using multipart_open_next/1. Options:\n\n - close_parent(+Boolean)\n Close Stream if DataStream is closed.\n - boundary(+Text)\n Define the boundary string. Text is an atom, string, code or\n character list.\n\n All parts of a multipart input can be read using the following\n skeleton:\n\n ==\n process_multipart(Stream) :-\n multipart_open(Stream, DataStream, [boundary(...)]),\n process_parts(DataStream).\n\n process_parts(DataStream) :-\n process_part(DataStream),\n ( multipart_open_next(DataStream)\n -> process_parts(DataStream)\n ; close(DataStream)\n ).\n ==\n\n @license The multipart parser contains code licensed under the\n MIT license, based on _node-formidable_ by Felix Geisendoerfer\n and Igor Afonov.", "prefix":"multipart_open" }, "http/http_stream:multipart_open_next/1": { "body": ["multipart_open_next(${1:DataStream})$2\n$0" ], "description":" multipart_open_next(+DataStream) is semidet.\n\n Prepare DataStream to read the next part from the multipart\n input data. Succeeds if a next part exists and fails if the last\n part was processed. Note that it is mandatory to read each part\n up to the end_of_file.", "prefix":"multipart_open_next" }, "http/http_stream:stream_range_open/3": { "body": [ "stream_range_open(${1:RawStream}, ${2:DataStream}, ${3:Options})$4\n$0" ], "description":" stream_range_open(+RawStream, -DataStream, +Options) is det.\n\n DataStream is a stream whose size is defined by the option\n size(ContentLength). Closing DataStream does not close\n RawStream. Options processed:\n\n - size(+Bytes)\n Number of bytes represented by the main stream.\n - onclose(:Closure)\n Calls call(Closure, RawStream, BytesLeft) when DataStream is\n closed. 
BytesLeft is the number of bytes of the range stream\n that have *not* been read, i.e., 0 (zero) if all data has been\n read from the stream when the range is closed. This was\n introduced for supporting Keep-alive in http_open/3 to\n reschedule the original stream for a new request if the data\n of the previous request was processed.", "prefix":"stream_range_open" }, "http/http_unix_daemon:http_daemon/0": { "body": ["http_daemon$1\n$0" ], "description":" http_daemon\n\n Start the HTTP server as a daemon process. This predicate\n processes the commandline arguments below. Commandline arguments\n that specify servers are processed in the order they appear\n using the following schema:\n\n 1. Arguments that act as default for all servers.\n 2. =|--http=Spec|= or =|--https=Spec|= is followed by\n arguments for that server until the next =|--http=Spec|=\n or =|--https=Spec|= or the end of the options.\n 3. If no =|--http=Spec|= or =|--https=Spec|= appears, one\n HTTP server is created from the specified parameters.\n\n Examples:\n\n ==\n --workers=10 --http --https\n --http=8080 --https=8443\n --http=localhost:8080 --workers=1 --https=8443 --workers=25\n ==\n\n $ --port=Port :\n Start HTTP server at Port. It requires root permission and the\n option =|--user=User|= to open ports below 1000. The default\n port is 80. If =|--https|= is used, the default port is 443.\n\n $ --ip=IP :\n Only listen to the given IP address. Typically used as\n =|--ip=localhost|= to restrict access to connections from\n _localhost_ if the server itself is behind an (Apache)\n proxy server running on the same host.\n\n $ --debug=Topic :\n Enable debugging Topic. See debug/3.\n\n $ --syslog=Ident :\n Write debug messages to the syslog daemon using Ident\n\n $ --user=User :\n When started as root to open a port below 1000, this option\n must be provided to switch to the target user for operating\n the server. The following actions are performed as root, i.e.,\n _before_ switching to User:\n\n - open the socket(s)\n - write the pidfile\n - setup syslog interaction\n - Read the certificate, key and password file (=|--pwfile=File|=)\n\n $ --group=Group :\n May be used in addition to =|--user|=. If omitted, the login\n group of the target user is used.\n\n $ --pidfile=File :\n Write the PID of the daemon process to File.\n\n $ --output=File :\n Send output of the process to File. By default, all\n Prolog console output is discarded.\n\n $ --fork[=Bool] :\n If given as =|--no-fork|= or =|--fork=false|=, the process\n runs in the foreground.\n\n $ --http[=(Bool|Port|BindTo:Port)] :\n Create a plain HTTP server. If the argument is missing or\n =true=, create at the specified or default address. Else\n use the given port and interface. Thus, =|--http|= creates\n a server at port 80, =|--http=8080|= creates one at port\n 8080 and =|--http=localhost:8080|= creates one at port\n 8080 that is only accessible from `localhost`.\n\n $ --https[=(Bool|Port|BindTo:Port)] :\n As =|--http|=, but creates an HTTPS server.\n Use =|--certfile|=, =|--keyfile|=, =|-pwfile|=,\n =|--password|= and =|--cipherlist|= to configure SSL for\n this server.\n\n $ --certfile=File :\n The server certificate for HTTPS.\n\n $ --keyfile=File :\n The server private key for HTTPS.\n\n $ --pwfile=File :\n File holding the password for accessing the private key. This\n is preferred over using =|--password=PW|= as it allows using\n file protection to avoid leaking the password. 
The file is\n read _before_ the server drops privileges when started with\n the =|--user|= option.\n\n $ --password=PW :\n The password for accessing the private key. See also `--pwfile`.\n\n $ --cipherlist=Ciphers :\n One or more cipher strings separated by colons. See the OpenSSL\n documentation for more information. Default is `DEFAULT`.\n\n $ --interactive[=Bool] :\n If =true= (default =false=) implies =|--no-fork|= and presents\n the Prolog toplevel after starting the server.\n\n $ --gtrace=[Bool] :\n Use the debugger to trace http_daemon/1.\n\n $ --sighup=Action :\n Action to perform on =|kill -HUP |=. Default is `reload`\n (running make/0). Alternative is `quit`, stopping the server.\n\n Other options are converted by argv_options/3 and passed to\n http_server/1. For example, this allows for:\n\n $ --workers=Count :\n Set the number of workers for the multi-threaded server.\n\n http_daemon/0 is defined as below. The start code for a specific\n server can use this as a starting point, for example for specifying\n defaults.\n\n ```\n http_daemon :-\n current_prolog_flag(argv, Argv),\n argv_options(Argv, _RestArgv, Options),\n http_daemon(Options).\n ```\n\n @see http_daemon/1", "prefix":"http_daemon" }, "http/http_unix_daemon:http_daemon/1": { "body": ["http_daemon(${1:Options})$2\n$0" ], "description":" http_daemon(+Options)\n\n Start the HTTP server as a daemon process. This predicate processes\n a Prolog option list. It is normally called from http_daemon/0,\n which derives the option list from the command line arguments.\n\n Error handling depends on whether or not interactive(true) is in\n effect. If so, the error is printed before entering the toplevel. In\n non-interactive mode this predicate calls halt(1).", "prefix":"http_daemon" }, "http/httpd_wrapper:http_current_request/1": { "body": ["http_current_request(${1:Request})$2\n$0" ], "description":" http_current_request(-Request) is semidet.\n\n Returns the HTTP request currently being processed. Fails\n silently if there is no current request. This typically happens\n if a goal is run outside the HTTP server context.", "prefix":"http_current_request" }, "http/httpd_wrapper:http_peer/2": { "body": ["http_peer(${1:Request}, ${2:PeerIP})$3\n$0" ], "description":" http_peer(+Request, -PeerIP:atom) is semidet.\n\n True when PeerIP is the IP address of the connection peer. If the\n connection is established via a proxy or CDN we try to find the\n initiating peer. Currently supports:\n\n - =Fastly-client-ip=\n - =X-forwarded-for=\n - =X-real-ip=\n - Direct connections", "prefix":"http_peer" }, "http/httpd_wrapper:http_relative_path/2": { "body": ["http_relative_path(${1:AbsPath}, ${2:RelPath})$3\n$0" ], "description":" http_relative_path(+AbsPath, -RelPath) is det.\n\n Convert an absolute path (without host, fragment or search) into\n a path relative to the current page. This call is intended to\n create reusable components returning relative paths for easier\n support of reverse proxies.", "prefix":"http_relative_path" }, "http/httpd_wrapper:http_send_header/1": { "body": ["http_send_header(${1:Header})$2\n$0" ], "description":" http_send_header(+Header)\n\n This API provides an alternative for writing the header field as\n a CGI header. Header has the format Name(Value), as produced by\n http_read_header/2.\n\n @deprecated Use CGI lines instead", "prefix":"http_send_header" }, "http/httpd_wrapper:http_spawned/1": { "body": ["http_spawned(${1:ThreadId})$2\n$0" ], "description":" http_spawned(+ThreadId)\n\n Internal use only. 
Indicate that the request is handed to thread\n ThreadId.", "prefix":"http_spawned" }, "http/httpd_wrapper:http_wrap_spawned/3": { "body": ["http_wrap_spawned(${1:Goal}, ${2:Request}, ${3:Close})$4\n$0" ], "description":" http_wrap_spawned(:Goal, -Request, -Close) is det.\n\n Internal use only. Helper for wrapping the handler for\n http_spawn/2.\n\n @see http_spawned/1, http_spawn/2.", "prefix":"http_wrap_spawned" }, "http/httpd_wrapper:http_wrapper/5": { "body": [ "http_wrapper(${1:Goal}, ${2:In}, ${3:Out}, ${4:Close}, ${5:Options})$6\n$0" ], "description":" http_wrapper(:Goal, +In, +Out, -Close, +Options) is det.\n\n Simple wrapper to read and decode an HTTP header from `In', call\n :Goal while watching for exceptions and send the result to the\n stream `Out'.\n\n The goal is assumed to write the reply to =current_output=\n preceeded by an HTTP header, closed by a blank line. The header\n *must* contain a Content-type: line. It may optionally\n contain a line =|Transfer-encoding: chunked|= to request chunked\n encoding.\n\n Options:\n\n * request(-Request)\n Return the full request to the caller\n * peer(+Peer)\n IP address of client\n\n @param Close Unified to one of =close=, =|Keep-Alive|= or\n spawned(ThreadId).", "prefix":"http_wrapper" }, "http/hub:current_hub/2": { "body":"current_hub(${1:Name}, ${2:Hub})$3\n$0", "description":"[nondet]current_hub(?Name, ?Hub).\nTrue when there exists a hub Hub with Name.", "prefix":"current_hub" }, "http/hub:hub_add/3": { "body":"hub_add(${1:Hub}, ${2:WebSocket}, ${3:Id})$4\n$0", "description":"[det]hub_add(+Hub, +WebSocket, ?Id).\nAdd a WebSocket to the hub. Id is used to identify this user. It may be provided (as a ground term) or is generated as a UUID.", "prefix":"hub_add" }, "http/hub:hub_broadcast/2": { "body":"hub_broadcast(${1:Hub}, ${2:Message})$3\n$0", "description":"[det]hub_broadcast(+Hub, +Message).\n", "prefix":"hub_broadcast" }, "http/hub:hub_broadcast/3": { "body":"hub_broadcast(${1:Hub}, ${2:Message}, ${3:Condition})$4\n$0", "description":"[det]hub_broadcast(+Hub, +Message, :Condition).\nSend Message to all websockets associated with Hub for which call(Condition, Id) succeeds. Note that this process is asynchronous: this predicate returns immediately after putting all requests in a broadcast queue. If a message cannot be delivered due to a network error, the hub is informed through io_error/3.", "prefix":"hub_broadcast" }, "http/hub:hub_create/3": { "body":"hub_create(${1:Name}, ${2:Hub}, ${3:Options})$4\n$0", "description":"[det]hub_create(+Name, -Hub, +Options).\nCreate a new hub. Hub is a dict containing the following public information: Hub . name(): The name of the hub (the Name argument)\n\nqueues() . event(): Message queue to which the hub thread(s) can listen.\n\n After creating a hub, the application normally creates a thread that listens to Hub.queues.event and exposes some mechanisms to establish websockets and add them to the hub using hub_add/3. \n\nSee also: http_upgrade_to_websocket/3 establishes a websocket from the SWI-Prolog webserver.\n\n ", "prefix":"hub_create" }, "http/hub:hub_send/2": { "body":"hub_send(${1:ClientId}, ${2:Message})$3\n$0", "description":"[semidet]hub_send(+ClientId, +Message).\nSend message to the indicated ClientId. Fails silently if ClientId does not exist. Message is either a single message (as accepted by ws_send/2) or a list of such messages. 
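\n\nA minimal sketch (the chat hub and predicate names are illustrative, and a WebSocket obtained elsewhere is assumed):\n\n\n\n:- use_module(library(http/hub)).\n\nstart_chat(Hub) :-\n hub_create(chat, Hub, []).\n\njoin_chat(Hub, WebSocket) :-\n hub_add(Hub, WebSocket, Id),\n hub_send(Id, text(welcome)).\n\n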
", "prefix":"hub_send" }, "http/hub:ws_property/2": { "body":"ws_property(${1:WebSocket}, ${2:Property})$3\n$0", "description":"[nondet]ws_property(+WebSocket, ?Property).\nTrue if Property is a property WebSocket. Defined properties are: subprotocol(Protocol): Protocol is the negotiated subprotocol. This is typically set as a property of the websocket by ws_open/3.\n\n ", "prefix":"ws_property" }, "http/javascript:javascript/4": { "body": ["javascript(${1:Content}, ${2:Vars}, ${3:VarDict}, ${4:DOM})$5\n$0" ], "description":" javascript(+Content, +Vars, +VarDict, -DOM) is det.\n\n Quasi quotation parser for JavaScript that allows for embedding\n Prolog variables to substitude _identifiers_ in the JavaScript\n snippet. Parameterizing a JavaScript string is achieved using\n the JavaScript `+` operator, which results in concatenation at\n the client side.\n\n ==\n ...,\n js_script({|javascript(Id, Config)||\n $(document).ready(function() {\n $(\"#\"+Id).tagit(Config);\n });\n |}),\n ...\n ==\n\n The current implementation tokenizes the JavaScript input and\n yields syntax errors on unterminated comments, strings, etc. No\n further parsing is implemented, which makes it possible to\n produce syntactically incorrect and partial JavaScript. Future\n versions are likely to include a full parser, generating syntax\n errors.\n\n The parser produces a term `\\List`, which is suitable for\n js_script//1 and html//1. Embedded variables are mapped to\n `\\js_expression(Var)`, while the remaining text is mapped to\n atoms.\n\n @tbd Implement a full JavaScript parser. Users should _not_\n rely on the ability to generate partial JavaScript\n snippets.", "prefix":"javascript" }, "http/javascript:js_args/3": { "body": ["js_args(${1:'Param1'}, ${2:'Param2'}, ${3:'Param3'})$4\n$0" ], "description":"js_args('Param1','Param2','Param3')", "prefix":"js_args" }, "http/js_write:javascript/4": { "body":"javascript(${1:Content}, ${2:Vars}, ${3:VarDict}, ${4:DOM})$5\n$0", "description":"[det]javascript(+Content, +Vars, +VarDict, -DOM).\nQuasi quotation parser for JavaScript that allows for embedding Prolog variables to substitude identifiers in the JavaScript snippet. Parameterizing a JavaScript string is achieved using the JavaScript + operator, which results in concatenation at the client side. \n\n ...,\n js_script({|javascript(Id, Config)||\n $(document).ready(function() {\n $(\"#\"+Id).tagit(Config);\n });\n |}),\n ...\n\n The current implementation tokenizes the JavaScript input and yields syntax errors on unterminated comments, strings, etc. No further parsing is implemented, which makes it possible to produce syntactically incorrect and partial JavaScript. Future versions are likely to include a full parser, generating syntax errors. \n\nThe parser produces a term \\List, which is suitable for js_script/3 and html/3. Embedded variables are mapped to \\js_expression(Var), while the remaining text is mapped to atoms. \n\nTo be done: Implement a full JavaScript parser. Users should not rely on the ability to generate partial JavaScript snippets.\n\n ", "prefix":"javascript" }, "http/js_write:js_arg/1": { "body":"js_arg(${1:Expression})$2\n$0", "description":"[semidet]js_arg(+Expression)//.\nSame as js_expression/3, but fails if Expression is invalid, where js_expression/3 raises an error. 
deprecated: New code should use js_expression/3.\n\n ", "prefix":"js_arg" }, "http/js_write:js_arg_list/1": { "body":"js_arg_list(${1:Expressions})$2\n$0", "description":"[det]js_arg_list(+Expressions:list)//.\nWrite javascript (function) arguments. This writes \"(\", Arg, ..., \")\". See js_expression/3 for valid argument values.", "prefix":"js_arg_list" }, "http/js_write:js_call/1": { "body":"js_call(${1:Term})$2\n$0", "description":"[det]js_call(+Term)//.\nEmit a call to a Javascript function. The Prolog functor is the name of the function. The arguments are converted from Prolog to JavaScript using js_arg_list/3. Please note that Prolog functors can be quoted atoms and thus the following is legal: \n\n ...\n html(script(type('text/javascript'),\n [ \\js_call('x.y.z'(hello, 42))\n ]),\n\n ", "prefix":"js_call" }, "http/js_write:js_expression/1": { "body":"js_expression(${1:Expression})$2\n$0", "description":"[det]js_expression(+Expression)//.\nEmit a single JSON argument. Expression is one of: Variable: Emitted as Javascript null\n\nList: Produces a Javascript list, where each element is processed by this library.\n\nobject(Attributes): Where Attributes is a Key-Value list where each pair can be written as Key-Value, Key=Value or Key(Value), accommodating all common constructs for this used in Prolog.\n\n{ K:V, ... }: Same as object(Attributes), providing a more JavaScript-like syntax. This may be useful if the object appears literally in the source-code, but is generally less friendly to produce as a result from a computation.\n\nDict: Emit a dict as a JSON object using json_write_dict/3.\n\njson(Term): Emits a term using json_write/3.\n\n@(Atom): Emits these constants without quotes. Normally used for the symbols true, false and null, but can also be used for emitting JavaScript symbols (i.e. function- or variable names).\n\nNumber: Emitted literally\n\nsymbol(Atom): Synonym for @(Atom). Deprecated.\n\nAtom or String: Emitted as quoted JavaScript string.\n\n ", "prefix":"js_expression" }, "http/js_write:js_new/2": { "body":"js_new(${1:Id}, ${2:Term})$3\n$0", "description":"[det]js_new(+Id, +Term)//.\nEmit a call to a Javascript object declaration. This is the same as: \n\n['var ', Id, ' = new ', \\js_call(Term)]\n\n ", "prefix":"js_new" }, "http/js_write:js_script/1": { "body":"js_script(${1:Content})$2\n$0", "description":"[det]js_script(+Content)//.\nGenerate a JavaScript script element with the given content.", "prefix":"js_script" }, "http/json:atom_json_dict/3": { "body": ["atom_json_dict(${1:Atom}, ${2:JSONDict}, ${3:Options})$4\n$0" ], "description":" atom_json_dict(+Atom, -JSONDict, +Options) is det.\n atom_json_dict(-Text, +JSONDict, +Options) is det.\n\n Convert between textual representation and a JSON term\n represented as a dict. Options are as for json_read/3.\n In _write_ mode, the additional option\n\n * as(Type)\n defines the output type, which is one of =atom=,\n =string= or =codes=.", "prefix":"atom_json_dict" }, "http/json:atom_json_term/3": { "body": ["atom_json_term(${1:Atom}, ${2:JSONTerm}, ${3:Options})$4\n$0" ], "description":" atom_json_term(?Atom, ?JSONTerm, +Options) is det.\n\n Convert between textual representation and a JSON term. 
In\n _write_ mode (JSONTerm to Atom), the option\n\n * as(Type)\n defines the output type, which is one of =atom= (default),\n =string=, =codes= or =chars=.", "prefix":"atom_json_term" }, "http/json:is_json_term/1": { "body": ["is_json_term(${1:Term})$2\n$0" ], "description":" is_json_term(@Term) is semidet.\n is_json_term(@Term, +Options) is semidet.\n\n True if Term is a json term. Options are the same as for\n json_read/2, defining the Prolog representation for the JSON\n =true=, =false= and =null= constants.", "prefix":"is_json_term" }, "http/json:is_json_term/2": { "body": ["is_json_term(${1:Term}, ${2:Options})$3\n$0" ], "description":" is_json_term(@Term) is semidet.\n is_json_term(@Term, +Options) is semidet.\n\n True if Term is a json term. Options are the same as for\n json_read/2, defining the Prolog representation for the JSON\n =true=, =false= and =null= constants.", "prefix":"is_json_term" }, "http/json:json_read/2": { "body": ["json_read(${1:Stream}, ${2:Term})$3\n$0" ], "description":" json_read(+Stream, -Term) is det.\n json_read(+Stream, -Term, +Options) is det.\n\n Read next JSON value from Stream into a Prolog term. The\n canonical representation for Term is:\n\n * A JSON object is mapped to a term json(NameValueList), where\n NameValueList is a list of Name=Value. Name is an atom\n created from the JSON string.\n\n * A JSON array is mapped to a Prolog list of JSON values.\n\n * A JSON string is mapped to a Prolog atom\n\n * A JSON number is mapped to a Prolog number\n\n * The JSON constants =true= and =false= are mapped -like JPL-\n to @(true) and @(false).\n\n * The JSON constant =null= is mapped to the Prolog term\n @(null)\n\n Here is a complete example in JSON and its corresponding Prolog\n term.\n\n ==\n { \"name\":\"Demo term\",\n \"created\": {\n \"day\":null,\n \"month\":\"December\",\n \"year\":2007\n },\n \"confirmed\":true,\n \"members\":[1,2,3]\n }\n ==\n\n ==\n json([ name='Demo term',\n created=json([day= @null, month='December', year=2007]),\n confirmed= @true,\n members=[1, 2, 3]\n ])\n ==\n\n The following options are processed:\n\n * null(+NullTerm)\n Term used to represent JSON =null=. Default @(null)\n * true(+TrueTerm)\n Term used to represent JSON =true=. Default @(true)\n * false(+FalseTerm)\n Term used to represent JSON =false=. Default @(false)\n * value_string_as(+Type)\n Prolog type used for strings used as value. Default\n is =atom=. The alternative is =string=, producing a\n packed string object. Please note that =codes= or\n =chars= would produce ambiguous output and is therefore\n not supported.\n\n If json_read/3 encounters end-of-file before any real data it\n binds Term to the term @(end_of_file).\n\n @see json_read_dict/3 to read a JSON term using the version 7\n extended data types.", "prefix":"json_read" }, "http/json:json_read/3": { "body": ["json_read(${1:Stream}, ${2:Term}, ${3:Options})$4\n$0" ], "description":" json_read(+Stream, -Term) is det.\n json_read(+Stream, -Term, +Options) is det.\n\n Read next JSON value from Stream into a Prolog term. The\n canonical representation for Term is:\n\n * A JSON object is mapped to a term json(NameValueList), where\n NameValueList is a list of Name=Value. 
Name is an atom\n created from the JSON string.\n\n * A JSON array is mapped to a Prolog list of JSON values.\n\n * A JSON string is mapped to a Prolog atom\n\n * A JSON number is mapped to a Prolog number\n\n * The JSON constants =true= and =false= are mapped -like JPL-\n to @(true) and @(false).\n\n * The JSON constant =null= is mapped to the Prolog term\n @(null)\n\n Here is a complete example in JSON and its corresponding Prolog\n term.\n\n ==\n { \"name\":\"Demo term\",\n \"created\": {\n \"day\":null,\n \"month\":\"December\",\n \"year\":2007\n },\n \"confirmed\":true,\n \"members\":[1,2,3]\n }\n ==\n\n ==\n json([ name='Demo term',\n created=json([day= @null, month='December', year=2007]),\n confirmed= @true,\n members=[1, 2, 3]\n ])\n ==\n\n The following options are processed:\n\n * null(+NullTerm)\n Term used to represent JSON =null=. Default @(null)\n * true(+TrueTerm)\n Term used to represent JSON =true=. Default @(true)\n * false(+FalseTerm)\n Term used to represent JSON =false=. Default @(false)\n * value_string_as(+Type)\n Prolog type used for strings used as value. Default\n is =atom=. The alternative is =string=, producing a\n packed string object. Please note that =codes= or\n =chars= would produce ambiguous output and is therefore\n not supported.\n\n If json_read/3 encounters end-of-file before any real data it\n binds Term to the term @(end_of_file).\n\n @see json_read_dict/3 to read a JSON term using the version 7\n extended data types.", "prefix":"json_read" }, "http/json:json_read_dict/2": { "body": ["json_read_dict(${1:Stream}, ${2:Dict})$3\n$0" ], "description":" json_read_dict(+Stream, -Dict) is det.\n json_read_dict(+Stream, -Dict, +Options) is det.\n\n Read a JSON object, returning objects as dicts. The\n representation depends on the options, where the default is:\n\n * String values are mapped to Prolog strings\n * JSON =true=, =false= and =null= are represented using these\n Prolog atoms.\n * JSON objects are mapped to dicts.\n * By default, a =type= field in an object assigns a tag for\n the dict.\n\n The predicate json_read_dict/3 processes the same options as\n json_read/3, but with different defaults. In addition, it\n processes the `tag` option. See json_read/3 for details about\n the shared options.\n\n * tag(+Name)\n When converting to/from a dict, map the indicated JSON\n attribute to the dict _tag_. No mapping is performed if Name\n is the empty atom ('', default). See json_read_dict/2 and\n json_write_dict/2.\n * null(+NullTerm)\n Default the atom `null`.\n * true(+TrueTerm)\n Default the atom `true`.\n * false(+FalseTerm)\n Default the atom `false`.\n * value_string_as(+Type)\n Type defaults to `string`, producing a packed string object.", "prefix":"json_read_dict" }, "http/json:json_read_dict/3": { "body": ["json_read_dict(${1:Stream}, ${2:Dict}, ${3:Options})$4\n$0" ], "description":" json_read_dict(+Stream, -Dict) is det.\n json_read_dict(+Stream, -Dict, +Options) is det.\n\n Read a JSON object, returning objects as dicts. The\n representation depends on the options, where the default is:\n\n * String values are mapped to Prolog strings\n * JSON =true=, =false= and =null= are represented using these\n Prolog atoms.\n * JSON objects are mapped to dicts.\n * By default, a =type= field in an object assigns a tag for\n the dict.\n\n The predicate json_read_dict/3 processes the same options as\n json_read/3, but with different defaults. In addition, it\n processes the `tag` option. 
See json_read/3 for details about\n the shared options.\n\n * tag(+Name)\n When converting to/from a dict, map the indicated JSON\n attribute to the dict _tag_. No mapping is performed if Name\n is the empty atom ('', default). See json_read_dict/2 and\n json_write_dict/2.\n * null(+NullTerm)\n Default the atom `null`.\n * true(+TrueTerm)\n Default the atom `true`.\n * false(+FalseTerm)\n Default the atom `false`.\n * value_string_as(+Type)\n Type defaults to `string`, producing a packed string object.", "prefix":"json_read_dict" }, "http/json:json_write/2": { "body": ["json_write(${1:Stream}, ${2:Term})$3\n$0" ], "description":" json_write(+Stream, +Term) is det.\n json_write(+Stream, +Term, +Options) is det.\n\n Write a JSON term to Stream. The JSON object is of the same\n format as produced by json_read/2, though we allow for some more\n flexibility with regard to pairs in objects. All of Name=Value,\n Name-Value and Name(Value) produce the same output.\n\n Values can be of the form #(Term), which causes `Term` to be\n _stringified_ if it is not an atom or string. Stringification is\n based on term_string/2.\n\n The version 7 _dict_ type is supported as well. If the dict has\n a _tag_, a property \"type\":\"tag\" is added to the object. This\n behaviour can be changed using the =tag= option (see below). For\n example:\n\n ==\n ?- json_write(current_output, point{x:1,y:2}).\n {\n \"type\":\"point\",\n \"x\":1,\n \"y\":2\n }\n ==\n\n In addition to the options recognised by json_read/3, the\n following options are recognised:\n\n * width(+Width)\n Width in which we try to format the result. Too long lines\n switch from _horizontal_ to _vertical_ layout for better\n readability. If performance is critical and human\n readability is not an issue use Width = 0, which causes a\n single-line output.\n\n * step(+Step)\n Indentation increment for next level. Default is 2.\n\n * tab(+TabDistance)\n Distance between tab-stops. If equal to Step, layout\n is generated with one tab per level.\n\n * serialize_unknown(+Boolean)\n If =true= (default =false=), serialize unknown terms and\n print them as a JSON string. The default raises a type\n error. Note that this option only makes sense if you can\n guarantee that the passed value is not an otherwise valid\n Prolog representation of a Prolog term.\n\n If a string is emitted, the sequence =|<\/|= is emitted as\n =|<\\/|=. This is valid JSON syntax which ensures that JSON\n objects can be safely embedded into an HTML =|