User:JVbot

From Wikisource

This user account is a bot operated by Jayvdb (talk), and approved by the English Wikisource community.

Tasks

Patrolling

The bot automatically patrols pages listed on the Whitelist, which may be modified by anyone at present. See the bot patrol log.

Each line should start with a username, followed by a list of entries, which may be any of the following:

  • [[Pagename]] or [[Special:Prefixindex/<Pagename>]] : these two are functionally equivalent, as they permit the page or subpages to be created and modified. The latter is easier on the eyes if the former would be a red link.
  • [[Author:<name>]] : any page listed on the author page
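
Assuming the whitelist format described above (a username followed by [[...]] links), a line could be parsed with a sketch like the following; parse_whitelist_line is an illustrative helper, not the bot's actual code:

```python
import re

def parse_whitelist_line(line):
    """Split a whitelist line into (username, entries).

    Assumes the format sketched above: a username followed by
    [[...]] links. Illustrative only, not the bot's code.
    """
    # Everything before the first link is taken as the username.
    username = line.split("[[", 1)[0].strip()
    # Each [[...]] entry becomes one whitelist item.
    entries = re.findall(r"\[\[([^\]]+)\]\]", line)
    return username, entries
```

The bot would then patrol any page matching one of the returned entries for that username.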

Uploads

WFB Flags

Done: This task is presumed complete, but is waiting for the backlog at Category:Speedy deletion requests to be cleared.

Stage 1

As part of an audit of images on Wikisource, all of the CIA World Fact Book flags can now be replaced with higher-resolution images uploaded to the commons using the naming convention [[Image:Flag of xyz (WFB 2004)]]. The notable exceptions are three flags used by the WFB only for islands of a sovereign country, which for some inexplicable reason have different dimensions from the flag of that country: Image:Flag of New Zealand (islands) (WFB 2004).gif, Image:Flag of Australia (islands) (WFB 2004).gif and Image:Flag of Norway (islands) (WFB 2004).gif.

The exact replacements that will occur can be found and improved on /WFB Flags. The method used is described in /WFB Flags/Method.

Script used: Replace.py
Status: complete

Stage 2

After replacing the flag images, the old flags need to be tagged for deletion. In case any flags were not replaced, the bot will be tagging only png files that appear on Special:Unusedimages.

Command
python unusedfiles.py -ext:png \
 -tag:'sdelete|A1: all use of this image has been replaced with a higher res image now on commons'
Sample output
Getting 60 pages from wikisource:en...

Image:WFB Flag of Afghanistan.png
+ 
+ {{sdelete|A1: all use of this image has been replaced with a higher res image now on commons}}

Do you want to save the changes? (Y/N) 
Script used: modified unusedfiles.py, with the fix applied and submitted to sf.net[1]
Status: complete

EB1911

This task is to handle the problems outlined at Wikisource:Bot requests#1911 Encyclopædia Britannica maintenance one-use bot.

Once complete, the task specific changes to 1911 Encyclopædia Britannica/Header will be removed.

Problems 2 & 3

To fix these two, the header template will be changed to detect the params "article", "nonotes" and any others that are no longer used, and place those articles into Category:EB1911 subpages needing header changes (a subcat of Category:Wikisource maintenance). The bot will then run through the pages in that category and update the header as follows.

  1. param "article" will be removed
  2. param "nonotes" will be changed to wikipedia="none" if a wikipedia param doesn't already exist

Pages in the category after the bot has completed will need to be fixed manually.

Script used: Replace.py
Status: pending
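
The two parameter changes above could be sketched as a plain regex rewrite. The helper name update_eb1911_header, and the simple `| param = value` syntax it assumes, are illustrative only, not the bot's actual code:

```python
import re

def update_eb1911_header(wikitext):
    # 1. Drop the deprecated "article" parameter entirely
    #    (assumes simple "| article = value" syntax).
    wikitext = re.sub(r"\|\s*article\s*=\s*[^|}]*", "", wikitext)
    # 2. Turn "nonotes" into wikipedia = none, but only if no
    #    wikipedia parameter exists yet; otherwise just drop it.
    if "wikipedia" not in wikitext:
        wikitext = re.sub(r"\|\s*nonotes\s*=\s*[^|}]*",
                          "| wikipedia = none ", wikitext)
    else:
        wikitext = re.sub(r"\|\s*nonotes\s*=\s*[^|}]*", "", wikitext)
    return wikitext
```

Anything the regexes fail to match stays in the category for manual fixing, as noted above.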

Problem 5

There are a lot of EB1911 subpages that have filled the 1911 Encyclopædia Britannica/Header template's wikipedia param with [[w:page|page]], which is unnecessary as that param expects just "page". The result was not pretty. A recent change to the EB1911 header has catered for this, hiding the mess, but it would be preferable to clean up the values in this param so that this kludge isn't needed.

To do this, the header will be modified to also categorise all subpages with the incorrect param value into Category:EB1911 subpages with incorrect wikipedia value (a subcat of Category:Wikisource maintenance). The bot will then run through pages in that category and fix the param value.

Command
python replace.py -summary:'fix wikipedia param' -family:wikisource \
                  -cat:EB1911_subpages_with_incorrect_wikipedia_value -regex \
    'wikipedia ?= ?\[\[[wW](ikipedia)?:([^\|]*)\|([^\]]*)\]\] and \[\[[wW](ikipedia)?:([^\|]*)\|([^\]]*)\]\]' \
    'wikipedia  = \2 | wikipedia2 = \5' \
    'wikipedia ?= ?\[\[[wW](ikipedia)?:([^\|]*)\|([^\]]*)\]\]' \
    'wikipedia = \2'
Script used: Replace.py
Status: complete
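
For reference, the simpler of the two patterns can be exercised in Python to confirm what it does (the sample value is invented):

```python
import re

# Same pattern as the second replace.py argument above.
pat = r"wikipedia ?= ?\[\[[wW](ikipedia)?:([^\|]*)\|([^\]]*)\]\]"

sample = "| wikipedia = [[w:Abacus|Abacus]]"
fixed = re.sub(pat, r"wikipedia = \2", sample)
print(fixed)  # | wikipedia = Abacus
```

Group 2 is the link target, which is all the param needs; the link label in group 3 is discarded.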

Problem 6

Currently, pages that are three levels deep are using {{header}} or {{header2}}. Changes have been made to {{EB1911}} to better handle pages that are three levels deep, so a bot now needs to convert those pages to use the EB1911 capabilities. User:Psychless/Temp holds a list of EB1911 pages without an EB1911 header.

python replace.py -links:User:Psychless/Temp -regex \
    '(?ms){{(h|H)eader.*title\s*= *\[\[\.\.\/\|[^/]*\/([^]]*).*previous = *\[\[\.\.\/([^|]*)\|.*next *= \[\[\.\.\/([^|]*)\|[^}]*}}' \
    '{{EB1911 | previous = \2/\3 | next = \2/\4 | wikipedia = "none" }}'
Script used: Replace.py
Status: in progress

Media

Categorise media

Add media to Category:PDF files, Category:OGG files, etc.

Script used: Replace.py
Status: in progress

Prepare for move to commons

Tag media with PD licenses and Commons categories, and tag with {{commons ok}}.

Script used: Replace.py
Status: in progress

Headers

Dead end pages

We have nearly 2000 pages listed at Special:Deadendpages, which as a result do not turn up in Special:Statistics. These need a header, or they need an author page.

 python replace.py -deadendpages -excepttext:'{{([hH]eader|no header)' -regex '(?ms)^(.*)$' "{{no header}}
 \1"
Script used: Replace.py
Status: in progress
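
The effect of the regex pair can be checked in Python on an invented dead-end page:

```python
import re

text = "Some orphaned page text."
# Prepend {{no header}} to the whole page, as in the command above.
tagged = re.sub(r"(?ms)^(.*)$", "{{no header}}\n\\1", text)
```

The -excepttext filter is what prevents double-tagging pages that already carry {{header}} or {{no header}}.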

Header conversion

A script to convert {{header}} into {{header2}}:

 python replace.py -namespace:0 -summary:'[bot] [[WS:STYLE|standardisation]]: replacing header with header2' \
                   -cat:'Pages with arrow in previous param' -regex \
                   '{{[Hh]eader([^}]*override_author[^=]*=[^|]*[A-Za-z][^}]*}})' \
                       '{{subst:header-layout-override\1' \
                   '{{[Hh]eader([^}]*)}}' \
                       '{{subst:header-layout | author = | \1}}' \
                   '←' '' '→' '' \
                   'section *= *\(([^)]*)\)' 'section = \1' \
                   'section *= *<br *\/?>(.*)' 'section = \1' \
                   '(section[^=]*=[^<}]*)<br *\/?>' '\1: '
Script used: Replace.py
Status: in progress

JCMatoeam

Remove the deprecated templates {{JCMatoeamV1}} and {{JCMatoeamV2}}, and tag the empty pages with {{OCR}}.

 python replace.py -transcludes:JCMatoeamV1 -regex '<noinclude>{{JCMatoeamV1[^}]*}}<\/noinclude>' '' '{{JCMatoeamV1[^}]*}}' ''
 python replace.py -transcludes:JCMatoeamV2 -regex '<noinclude>{{JCMatoeamV2[^}]*}}<\/noinclude>' '' '{{JCMatoeamV2[^}]*}}' ''
Script used: Replace.py
Status: completed

History of Iowa

Move pages under History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4 into the Page: namespace.

 python movepages.py -prefixindex:"History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/" -prefix:Page:
Script used: movepages.py
Status: done

Remove {{header}} from the pages now under Page:History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4.

 python replace.py -prefixindex:"Page:History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/" \
      -regex '(?ms){{(h|H)eader[^}]*}}' ''
Script used: replace.py
Status: done

Replace the redirects with {{dated soft redirect}} once Psychless (talk | contribs) is happy with the result of the last two stages.

 python replace.py -prefixindex:"History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/" \
      -regex ???
Script used: replace.py
Status: on hold

Easton's page name cleanup

There are a number of pages in Special:Prefixindex/Easton's Bible DIctionary (note the wrong capitalisation of DIc). The redirects need to be replaced with {{dated soft redirect}}s.

 python replace.py -regex -prefixindex:"Easton's Bible DIctionary" \
       '#REDIRECT \[\[(.*)\]\]' \
       '{{subst:dated soft redirect|"[[\1]]"}}'
Script used: replace.py
Status: in progress
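
The same rewrite can be tried in Python on an invented redirect:

```python
import re

redirect = "#REDIRECT [[Easton's Bible Dictionary/Aaron]]"
# Same pattern and replacement as the replace.py arguments above.
soft = re.sub(r"#REDIRECT \[\[(.*)\]\]",
              r'{{subst:dated soft redirect|"[[\1]]"}}',
              redirect)
print(soft)
```

The subst: prefix means MediaWiki expands the template once at save time, which is what makes the redirect "dated".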

A Course In Miracles

Page move requested at [2].

 python movepages.py -file:pagelist.txt " In " " in " " For " " for "
Script used: a hacked movepages.py
Status: in progress
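
The pairwise arguments behave like a sequence of string replacements on each title; a minimal Python equivalent (fix_title is an illustrative name, not part of the pywikibot script):

```python
def fix_title(title, pairs=((" In ", " in "), (" For ", " for "))):
    """Mirror the pairwise old/new arguments given to movepages.py above.

    Illustrative helper only; the real script reads titles from
    pagelist.txt and performs the actual page moves.
    """
    for old, new in pairs:
        title = title.replace(old, new)
    return title
```

For example, `fix_title("A Course In Miracles")` yields "A Course in Miracles", the correctly cased title the pages are being moved to.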