Wikisource:Scriptorium/Archives/2014-03
Please do not post any new comments on this page.
This is a discussion archive first created in March 2014, although the comments contained were likely posted before and after this date. See current discussion or the archives index.
Announcements
Wikidata ↔ Wikisource phase 2 proposed
Wikidata is proposing that phase 2 of the transition be scheduled for February 25. This phase allows us to pull data from Wikidata, e.g. have {{authority control}} data provided through addition of the template, without local data, as it is pulled from WD. They have started that discussion at d:Wikidata:Wikisource — billinghurst sDrewth 15:43, 27 January 2014 (UTC)
- See discussion #Access to data in Wikidata
<poem> tag to become <lines>
“For some time, work has been ongoing on a merge of the Poem extension into MediaWiki core [1] [2]. (For those not aware, this extension [3] implements a simple <poem> tag which, among other things, preserves newlines.)

Several developers have expressed the desire for an alternative name for this tag, alongside <poem> (which of course will be kept for backward compatibility). This is because the tag is sometimes used for various other uses besides poetry. There were many suggestions (see the bug report [1]), but it was eventually agreed to use Michael M.'s suggestion of <lines>. This name puts the focus on the individual "lines" of the content, which is exactly what the tag is doing. We almost had a collision with a previous proposal (<verbatim> conflicted with a tag in use on Wikia), so we wish to ensure that no-one else is using <lines>. No-one is yet aware of any MediaWiki extensions or other code using a tag named <lines> in wikitext. If you think the name <lines> will be an issue, or if you have any other concerns with this merge, please speak up, either here or at the bug report.”
What it would mean for Wikisource is that, with the proposed change being backwards compatible, we can make updates in our own time. — billinghurst sDrewth 10:31, 8 February 2014 (UTC)
- I should clarify that <lines> is expected to work identically to <poem>, so there should be no need to change anything (although some users may find that they need to alter their individual custom CSS styling). Indeed, there is a proposal in the works to provide automatic line numbering for <poem> tags, which I think is a positive development for Wikisource. This, that and the other (talk) 00:08, 11 February 2014 (UTC)
- I'm not sure automatic line numbering would be good for WS. Nearly every poem found in novels and other texts is marked using the <poem> tag and they are not usually numbered. Any change in this would essentially break the format of many of the books here. Plus, I think the current numbering system, via template, is a bit more flexible (in terms of where the numbering goes). The Haz talk 00:18, 11 February 2014 (UTC)
- Edit. I read automatic as default. Of course, adding the line numbering option to the poem tag would be great, as long as it's not the default setting. The Haz talk 00:21, 11 February 2014 (UTC)
Proposals
Facilitating interwiki bots for Wikidata migration
As has been previously announced, Wikidata is soon (14 Jan) to start to migrate xxWS data to their system to manage interlanguage links. They will have bots in place that undertake this matter. Our existing bot policy does not require permission to operate bots for interlanguage links, though there is the requirement to get a bot flag through application to the wiki. As the bots that will be operating are the same operators that undertook the xxWP migrations, I would like for the community to be able to expedite the bot flag approval process, rather than our usual more languid approach. I would like for us to be able to have testing take place, and the authority provided to the existing bureaucrats (@Hesperian: @BirgitteSB: @Zhaladshar: or if unavailable, by approved delegation to stewards) for the assignation of rights to be assessed and allocated. — billinghurst sDrewth 04:28, 3 January 2014 (UTC)
- If I understand you right, you're asking the community to authorize 'crats to bypass community discussion and unilaterally flag bots — just for this one-off uncontroversial situation. I am comfortable with that, bearing in mind that 'crats can also unflag. I would envisage it working just like speedy deletion and the "bold-revert-discuss" cycle: I'm happy to be bold and flag these bots unilaterally, but should anyone challenge any decision to flag, I will unflag and take the discussion to the community. Hesperian 04:53, 3 January 2014 (UTC)
- I'm OK with this as a one-off situation with the following 2 provisos: 1) the bot(s) have a successful test run; and 2) at the conclusion of this data-migration process they are deflagged. If the bot-operator wishes to have a longer term presence then an "application" through the normal process needs to be made (in the same way as our policy on temporary admin access operates). Beeswaxcandle (talk) 05:16, 3 January 2014 (UTC)
- @Beeswaxcandle: The bots would be a permanent feature/fixture, constantly working in relation to people adding/deleting/updating the interwiki data at Wikidata. Can you explain why you see the bots operating as unflagged bots as an advantage? Would it not be possible that we, as a community, could review the bots' operations as part of our annual bot review? — billinghurst sDrewth 15:45, 4 January 2014 (UTC)
- Sorry, I misunderstood intention here. In my RL data world migration happens once and then the old system is decommissioned and only the new system is used, therefore any automated process of migration is temporary. I assumed that this WD process would be similarly temporary. I don't want unflagged bots showing up in RC. Beeswaxcandle (talk) 17:38, 4 January 2014 (UTC)
- Migration does happen once; it is the ongoing process/management, predominantly of new works, either here or at another wiki, that continues. — billinghurst sDrewth 14:10, 5 January 2014 (UTC)
- I have yet to see any specific examples of what is going to be migrated or what will happen to the data, so I still have no clue whether this is a worthwhile endeavour. All discussions thus far have been largely abstract. Having seen the catastrophe that happened to interwiki linking on Wikipedia, I'm not in favor of having this process outsourced. --EncycloPetey (talk) 05:36, 3 January 2014 (UTC)
- The plan is at the previously announced d:Wikidata:Wikisource. In the primary phase, Author: ns interwikis (e.g. Author:William Shakespeare's interwikis would be moved to d:q692 into a Wikisource section), then in the main ns (e.g. Pride and Prejudice). My understanding is that the first test would be a dry run of the import of the links to test WD, with no modifications back [and iterations until happy, and no bot bits required]. When we get to the live WD, I would be hoping that we can do a series of smaller test iterations of the import, and the link modification here, and hopefully sign off the bot flag. Re any errors, probably the advantage of not coming first. The Voy: links seem to have flowed without issue; and let me say that with Commons still being done manually, I don't want to manually do enWS. — billinghurst sDrewth 16:02, 4 January 2014 (UTC)
- Oh, I've seen and read "the plan", but most of the details are still very vague. "The plan" at this point consists mostly of descriptions of kinds of pages and namespaces on Wikisource, rather than of actual examples of data items on Wikidata to show us what will happen. So I repeat: I have yet to see any specific examples of what is going to be migrated or what will happen to the data. --EncycloPetey (talk) 04:50, 8 January 2014 (UTC)
- It is just the interwiki links being migrated at this stage; they have not set a date for any further integration, only that it will follow at some point in the future. To follow Billinghurst's example, Author:William Shakespeare is currently linked to 19 other Wikisources, such as fr:Auteur:William Shakespeare. Following migration, they will be listed under "Wikisource pages linked to this item" on Wikidata:Q692 and deleted from the bottom of the Wikisource page. I do not believe this is limited to the Author namespace (which is what I had originally heard) but the principle will apply to all pages. New Wikidata entries will be created for pages that do not currently exist.
- If there are any problems, I suspect it will either be with subpages (sometimes these are all linked to the same page elsewhere, which won't work anymore) or possibly with potentially confusing works like The Bible, where we have multiple versions of the same thing—and so do other Wikisources (there are no interwiki problems that I know of at the moment but this is where things could go wrong during migration). Another possible problem is if an author, for example, is not currently linked to a biography on Wikipedia and this is not caught during the migration. We may end up with two separate Wikidata entries for the same person. That will just need to be merged, however. - AdamBMorgan (talk) 11:03, 8 January 2014 (UTC)
- Another potential error-situation: Two versions on one Wikisource linking to one version on another Wikisource (or something similar). I think Wikidata only handles single, direct 1-to-1 links. The first example I found: Beowulf (a versions page) and Beowulf (Harrison and Sharp) both link to fr:Beowulf (a versions page); fr:Beowulf/Botkin links to Beowulf (the versions page again) but it does not link back. I could fix those but I don't expect them to be the only cases. In a variation on this: We have two versions of On the Origin of Species, both with unique sets of interwiki links, but a lot of the targets appear to be undated generic versions rather than our own specific versions, while the German work is an 1876 version linked to our 1873 sixth edition. - AdamBMorgan (talk) 22:24, 8 January 2014 (UTC)
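The 1-to-1 constraint behind this error-situation can be sketched in a few lines. This is a rough, hypothetical model, not Wikidata's actual API: each item stores at most one page title per wiki, so two local versions pages cannot both attach to the same item.

```python
# Hypothetical sketch of Wikidata's sitelink rule: one page title per wiki
# per item. The class and QID below are illustrative only.

class Item:
    """A minimal stand-in for a Wikidata item's sitelink table."""

    def __init__(self, qid):
        self.qid = qid
        self.sitelinks = {}  # wiki code -> page title (at most one per wiki)

    def set_sitelink(self, wiki, title):
        # A second, different local page cannot share the same item.
        if wiki in self.sitelinks and self.sitelinks[wiki] != title:
            raise ValueError(
                f"{self.qid} already links {wiki} to "
                f"{self.sitelinks[wiki]!r}; cannot also link {title!r}"
            )
        self.sitelinks[wiki] = title

beowulf = Item("Q_BEOWULF")  # invented QID for illustration
beowulf.set_sitelink("enwikisource", "Beowulf")
beowulf.set_sitelink("frwikisource", "Beowulf")
try:
    beowulf.set_sitelink("enwikisource", "Beowulf (Harrison and Sharp)")
except ValueError:
    print("conflict")  # the second version needs its own item
```

Under this model, the Harrison and Sharp edition would have to get a separate item, which is why the cross-version links above cannot survive migration unchanged.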
- The problem goes further and deeper than that. What happens in a situation where we have more than one English translation of any work originally written in another language? Where do the interwikis go? No one has yet answered this question, but we're going to start migrating interwiki links? Where exactly are they being migrated to since this question hasn't been answered? The interwiki links are supposed to be for the exact same work/edition, so that data about that particular edition can be housed at Wikidata, but a translation is never the same edition as the original, so where will the interwiki links go? That is, will Wikidata only have data about works, data about particular editions, particular what?? This issue needs to be resolved if data is going to be migrated. --EncycloPetey (talk) 20:40, 11 January 2014 (UTC)
- This page might help in clarifying or at least questions can be asked there, D:Wikidata:Books_task_force#Edition_item_properties.--Mpaa (talk) 22:08, 11 January 2014 (UTC)
- Nope. That doesn't address the issues at all; it's merely a list of possible labels. --EncycloPetey (talk) 05:32, 12 January 2014 (UTC)
- I don't know if it will work but the plan they seem to be going for is a "work" entry on Wikidata for the general piece, say The Bible or The Odyssey, which will link to a disambiguation page in most cases (although many Wikisources, and even some works here, have a generic piece instead of a specific edition). There will then be a separate "edition" entry on Wikidata for each specific edition of a work, such as the Cowper translation or the Butler translation of The Odyssey. The edition should link up to the work through a property related to "instance of" (probably "edition of"). There will need to be some corrections, but that is already true now. Just looking at The Odyssey, there are interwikis randomly sprinkled over those pages and both the disambiguation page and the Butler translation link to the same pages (which won't be possible anymore). This will be easier to manage when everything is controlled centrally and we don't need to make the same edit on half a dozen different Wikisources. I suspect most interwikis will end up at the work/disambiguation level, especially for other Wikisources' works. - AdamBMorgan (talk) 10:12, 14 January 2014 (UTC)
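The two-level scheme described above can be sketched as plain data. Q35160 is from the discussion; the other item identifiers and the exact property name ("edition of") are assumptions for illustration.

```python
# A rough sketch of the proposed work/edition split: edition items point
# at their work item through an "edition of" property. QIDs other than
# Q35160 are invented.

items = {
    "Q35160":   {"label": "The Odyssey", "type": "work"},
    "Q_BUTLER": {"label": "The Odyssey (Butler translation)",
                 "type": "edition", "edition of": "Q35160"},
    "Q_COWPER": {"label": "The Odyssey (Cowper translation)",
                 "type": "edition", "edition of": "Q35160"},
}

def editions_of(work_qid):
    """All edition items whose 'edition of' property names this work."""
    return sorted(q for q, data in items.items()
                  if data.get("edition of") == work_qid)

print(editions_of("Q35160"))  # both translations resolve to the one work
```

The point of the hierarchy is visible here: interwiki links between generic versions pages would live on the work item, while edition-specific links would live on the edition items.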
- The Cowper translation of the Odyssey is not an "edition", it's a "text". That "text" may, and presumably has, been published many times, and each separate publication is an "edition". For an example of the importance of distinguishing works from editions from texts, see A Light Man. Hesperian 10:58, 14 January 2014 (UTC)
- I'm sorry, I don't know what this means. Specifically, I don't know what "text" is supposed to mean in this context, nor how that differs from edition (A Light Man did not help). Strictly, the Cowper translation is an edition, a text and a work; as well as probably several other nouns. The people trying to set up a scheme on Wikidata chose the words "work" and "edition" to define their hierarchy, which is how I am using them here. They could have, for example, chosen "high-level"/"parent" and "low-level"/"child" instead. It's just an attempt to establish a simple terminology for the concept. We shouldn't cross this with other terminology from different contexts or it will just cause unnecessary confusion. - AdamBMorgan (talk) 11:38, 14 January 2014 (UTC)
- It means (in practical terms) that the data structure being established on WikiData for our publication data will be woefully inadequate. They will have no means of distinguishing different editions of the Cowper translation in their database. We therefore will not be able to use WikiData to house information about editions, texts, and translations, because (a) the data structure will be inadequate, and (b) they will have a confusing internal jargon that differs from local usage, which (c) will impede communication in resolving these issues. --EncycloPetey (talk) 05:01, 15 January 2014 (UTC)
- The Cowper translation of the Odyssey may be found in The Iliad and the Odyssey (1791) and also in The Works of William Cowper (1836). Each of these distinct publications is an edition; the Cowper translation itself is not. I really think the concept of an "edition" must be reserved for works as distinctly and uniquely published, not for abstractions like "the Cowper translation" which apply to multiple publications or even multiple revisions. What we need is a generic "is a derivative work of" relation, to handle the situation where: an old German fairy tale is written down by various authors. One of those stories goes through various revisions over the life of the author. One of those revisions is translated into English multiple times. One of those translations is revised multiple times over the life of the translator. One of those revisions is abridged, and that abridgement is published multiple times. We probably can't expect to capture the fact that a publication is a publication of an abridgement of a revision of a translation of a revision of a version of a work. But we do need to capture these dependencies somehow. If everything is declared to be either "work" or "edition" (but not both), how are we to handle the above? Declare all those editions to be editions of the original fairy tale, and lose all that semantic detail? That would be terrible. Hesperian 05:22, 15 January 2014 (UTC)
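The generic "is a derivative work of" relation proposed here can be sketched as a single link per record, with the full derivation chain recovered by walking the links. All the record names are hypothetical stand-ins for the fairy-tale example.

```python
# Sketch of a generic "derivative of" relation: each record names only its
# immediate source, yet the whole chain of dependencies stays recoverable.

derived_from = {
    "abridgement, 2nd printing": "abridgement",
    "abridgement": "translation, revised",
    "translation, revised": "translation",
    "translation": "tale, author's revision",
    "tale, author's revision": "tale, first written version",
}

def derivation_chain(item):
    """Follow 'derivative of' links back to the original work."""
    chain = [item]
    while chain[-1] in derived_from:
        chain.append(derived_from[chain[-1]])
    return chain

chain = derivation_chain("abridgement, 2nd printing")
print(len(chain))  # 6: from the printing back to the first written version
```

A flat work/edition pair cannot express this chain; a single transitive relation like the one sketched can, without naming every intermediate category.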
- Wikidata can support either; they just get given a Q# number and properties can be added. The terminology looks like it was developed by Aubrey or Micru, neither of whom have English as their first language and were just trying to explain a concept anyway. Perhaps a more generic "Object" and "Instance" would work better? The Odyssey is the object, while all editions, versions, texts and/or translations are instances of that object. As Wikidata is set up, there should be an item (each with its own Q number) for the object and each instance thereof. The concept of edition, version, text or translation is not something a database would recognise; all would just have the Wikidata property "instance of" and the Q number of the main "The Odyssey" (Q35160, for the record). Part of the problem is the inclusion of older, non-scan based texts on Wikisource, which are not based on specific physical editions (as is the case with more recent texts), but Wikidata should still be able to handle that (mostly because it is just based on the "instance of" property). - AdamBMorgan (talk) 13:15, 15 January 2014 (UTC)
- To illustrate the above, I manually migrated The Odyssey (with versions) and Treasure Island (with versions). The latter is actually a better example as we have two distinct editions here. (I kept the non-English links at the disambiguation/parent/object level, which isn't perfect but they can be corrected later.) - AdamBMorgan (talk) 21:17, 15 January 2014 (UTC)
- It would appear this particular book is also a mid-90s German Eurohouse act: a pertinent illustration of the dangers of automatic migration. That aside, thank you for the examples. Hesperian 00:07, 16 January 2014 (UTC)
- Hesperian: your previous comment was quoted in this discussion where we are also debating the same topic. I invite all of you people to come there and join the discussion! :) Candalua (talk) 18:25, 17 January 2014 (UTC)
- Sorry to be this late, but I didn't see the mention. @AdamBMorgan: you're right the terminology isn't perfect, but here I can do a quick recap of what we thought:
- We used the w:Functional_Requirements_for_Bibliographic_Records model (a widely used and famous conceptual framework in library science) to underline the fact that you can view a book in 4 different ways, or levels: as a work, an expression, a manifestation or an item. We don't really need all 4 of them in Wikimedia projects: when Wikipedia talks about a book, it is often in the "work" view (the Bible, Pinocchio, Hamlet as concept and work, which has been realised in several translations and editions and different media). On Wikisource, we use a particular edition (sometimes, we have more than one) of that book. We have different Wikisources, so we can have translations. Not to complicate too much, we didn't use the technical terms "expression" or "manifestation", as the boundary is a bit blurred or at least not easy to grasp. So we used "edition" instead, collapsing those 2 FRBR layers into 1 (for what it's worth, other conceptual frameworks similar to FRBR, like Bibframe, collapse those 2 layers too). Thus the duality work - edition. I hope this helps. --Aubrey (talk) 09:49, 30 January 2014 (UTC)
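The collapsing Aubrey describes can be written down as a tiny mapping. This is only a restatement of the recap above, not any official FRBR or Wikidata definition.

```python
# The four FRBR levels, and the collapse of the two middle layers into the
# single "edition" level used in the Wikidata scheme. Illustrative only.

FRBR_LEVELS = ["work", "expression", "manifestation", "item"]

def to_wikidata_level(frbr_level):
    """Map an FRBR level onto the two-level work/edition scheme."""
    if frbr_level == "work":
        return "work"
    if frbr_level in ("expression", "manifestation"):
        return "edition"  # the two middle FRBR layers are collapsed
    return None  # physical items (single copies) are out of scope

print([to_wikidata_level(level) for level in FRBR_LEVELS])
```

Hesperian's "text" (an expression) and "edition" (a manifestation) both land on the single "edition" level here, which is exactly the loss of detail being debated in the thread.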
- Support Fast track approval for bot in this case. JeepdaySock (AKA, Jeepday) 11:34, 3 January 2014 (UTC) Given the questions raised, retracting support for fast track approval. Keep in mind that the only thing a bot flag does is affect recent-changes visibility. JeepdaySock (AKA, Jeepday) 15:31, 8 January 2014 (UTC)
- Support --Mpaa (talk) 16:56, 4 January 2014 (UTC)
- Support - AdamBMorgan (talk) 22:27, 8 January 2014 (UTC). Regardless of the potential for problems mentioned above, I would still prefer the Wikidata-related bot(s) to have bot flags, and I think this one-off variance from normal policy is appropriate. Lacking a bot flag won't do much other than clutter up watchlists without the option of filtering them.
- Oppose — until (or unless?) the checkbox under Preferences / Recent changes / Advanced options to toggle Show Wikidata edits by default in recent changes and watchlist... on or off is provided to all before any bot runs, even for testing purposes, begin. I'd prefer the bug to allow this toggle to work with "enhanced changes" be resolved prior to authorizing any runs as well, but that may be an "unfair" bridge-too-far request just on my part (I kind of rely on the advanced/enhanced views for my watchlist and/or recent changes list personally, but maybe the majority of other folks do not). To see what I "mean here", visit the same in your Wikipedia User preferences. -- George Orwell III (talk) 00:37, 9 January 2014 (UTC)
- fwiw... showing wikidata edits is now an option in User preferences. -- George Orwell III (talk) 20:38, 15 January 2014 (UTC)
- Oppose implementing a plan that hasn't been thought through yet. --EncycloPetey (talk) 20:40, 11 January 2014 (UTC)
- As I've said before, when the interwiki migration for Wikipedia happened, the result was a shambles that is becoming a catastrophe. The assumption was that it was simply a matter of shifting the links storage location and consolidation of differences of linking. This was far from what actually happened. Now, all the interwikis are controlled at Wikidata, and changes to interwikis are managed, patrolled, and made without anyone at Wikipedia checking them. How could they, when the site houses so much data, and many of the edits are cryptic changes to data elements? You have to go to Wikidata to see what changes have been made; it can't be monitored locally.
- Further, in my own specialist field (botany), an editor banned from Wikipedia and Wiktionary (both the English and Dutch projects), now makes thousands of changes to the interwiki linking for these articles. There was so much drama in trying to get that user banned from Wikipedia, that although everyone there in the specialist group deplores what is happening, no one wants to go through the drama all over again. What has happened is that Wikipedia has outsourced all maintenance of its links to another site, where a completely different group of people has taken charge. Some of these people don't understand the issues, and some were banned from Wikipedia for doing what they're now doing at Wikidata. Now they want to repeat that process here, and I think it's a terrible idea. --EncycloPetey (talk) 05:45, 12 January 2014 (UTC)
- This is true of Commons as well and I don't know of any problems with their hosting of DjVus etc. - AdamBMorgan (talk) 10:23, 13 January 2014 (UTC)
- Commons is very different. All Commons does is host our files, and if a file is renamed, we get notified and anything that was broken is fixed. Commons does not have control of our cross linking. With WikiData, they would have total control of migrated data. Changes that they make at their end change our linking, and we would not be notified. Allowing them to host our links means that we would be giving over control of how our internal links are set up. This is very different from hosting DjVus at Commons. --EncycloPetey (talk) 04:06, 14 January 2014 (UTC)
- Personally, I don't think interwiki links are as important as the files on Commons. Additionally, I don't think our current interlinking is especially good itself, so Wikidata will at least not make anything worse. Interwiki links for Wikisource are unlikely to be contentious but, if there is a problem, we can always override Wikidata locally (which a bot will probably try to "fix" but that can be dealt with as well). - AdamBMorgan (talk) 10:12, 14 January 2014 (UTC)
- Re "WD will at least not make anything worse": I disagree. As I've commented before, the migration destroyed the linking at Wikipedia, and removed control and oversight to a separate project. While the current state of interwiki links here may be bad, I fully expect the three points I just enumerated to make the situation much worse. It happened with WP, and I don't see any reason to believe the same will not happen here. WikiData will simply create problems we haven't thought of yet, and that we never expected would happen. --EncycloPetey (talk) 04:54, 15 January 2014 (UTC)
- I can't see how interwiki links could be contentious, for Wikisource or botany. Why do we need local control and oversight? A centralised interwiki database seems to me far superior to the current system in every way. - AdamBMorgan (talk) 13:15, 15 January 2014 (UTC)
- That's exactly my point: "WikiData will simply create problems we haven't thought of yet, and that we never expected would happen." I've been through this process and have seen botany interwiki links become contentious. This was a problem they hadn't thought of, never expected would happen, and now are doing nothing to fix. --EncycloPetey (talk) 22:25, 18 January 2014 (UTC)
- I don't feel super secure about granting bot flags to bots that are making these kinds of changes. I think we should have a limited test run with one or a few of the bots that will be migrating the interwiki links so that we can make sure that the changes that are happening are good for our community. Once we agree that we like (or don't) what is going on we can expedite the process.—Zhaladshar (Talk) 19:07, 12 January 2014 (UTC)
- Might be wrong ... but once data have been imported into WD, my understanding is that 1) the WD item will be added to the page properties (see [1]) and I guess this process will be invisible to us; 2) the bots will just remove the interwiki links from the page (see [2]).
- How useful is it to observe this process, given the concerns above? And there is always the option to "Show bots" in RecentChanges to make the edits visible. We will not know what will be linked unless we check the links in a page's sidebar or on Wikidata, with or without bot approval.--Mpaa (talk) 22:31, 12 January 2014 (UTC)
- I have not understood how much margin we have to influence the migration, other than perhaps refusing it. Let's have them start with the Author: namespace, which looks less controversial, and let's discuss with the WD people what to gather and how to use the info in other namespaces. As long as we discuss it only here, I am afraid we will not get very far.--Mpaa (talk) 22:31, 12 January 2014 (UTC)
- Phase 1 migration happens tomorrow, so I don't think we have much time left to debate its merits or change the plan. I can't see how interwiki links will be much of a problem for Wikisource. I've fixed a few in the past but I don't think they get patrolled that much anyway. - AdamBMorgan (talk) 10:23, 13 January 2014 (UTC)
OK, so it's started with DexBot making a few transfers in the deaths categories beginning with 10xx. However, User:Ladsgroup has undone 4 of them and User:CandalBot has recreated the iw links. What's the point of taking the links away with one bot only to have another put them back again? Beeswaxcandle (talk) 06:35, 15 January 2014 (UTC)
- You're getting this thing wrong. At first we need to connect pages of Wikisource and Wikidata (which my bot is doing even right now, as you can see at d:Special:Contributions/Dexbot). After that we need to remove the existing interwiki links (because Wikidata can handle that). My bot didn't add interwiki links to Wikidata correctly (it can now, but it had a bug) while it was removing interwiki links, so I stopped the bot, fixed it, and undid the incorrect edits. Interwiki bots (the ones that add [[fa:Something]] links) should be stopped right now. Ladsgroup (talk) 06:44, 15 January 2014 (UTC)
- I think maybe "you're getting this thing wrong", Wikisource does NOT "need to connect pages of wikisource and wikidata", you want to do this. As best I can tell there is not community support for these changes. Challenges to process have been raised and have not been addressed. JeepdaySock (AKA, Jeepday) 11:42, 15 January 2014 (UTC)
- I for one really want the connection of pages between Wikisource and Wikidata and I'm disappointed that we've hit a problem at the last second. However, either way, CandalBot probably needs to be stopped as it is likely to be confused by activity on other Wikisources even if there is none here. - AdamBMorgan (talk) 13:15, 15 January 2014 (UTC)
- The "last second" nature of the problem arises from the fact that no one bothered to survey the community here about the go-ahead until the last second. I was involved in starting discussions and raising some of these same issues at WikiData from a long time ago, but no one there ever responded with satisfactory answers. No one has yet told us clearly what the intentions are, or what the product will really look like, or shown that it can be properly managed. I for one am firmly opposed to the "do it first, fix it later" approach on WikiData, because problems there are not being fixed. --EncycloPetey (talk) 22:22, 18 January 2014 (UTC)
Beeswaxcandle, AdamBMorgan: just want to point out that CandalBot has already been stopped for some time. I had clearly stated in d:Wikidata:Wikisource that it was running, and on the related talk page I asked to be notified once I had to stop it; which is what happened. Seems like somebody started doing things too early, and without knowing what they were doing :( Candalua (talk) 15:49, 15 January 2014 (UTC)
A note about displaying wikidata edits
Along with the upgrade that enabled Wikidata for Wikisource, two new options were added to your User: preferences, but they are not enabled by default.
One new User: preference option is under the Watchlist tab - just enable the checkbox next to Show Wikidata edits in your watchlist to get that "working". I use the term "working" here for lack of a better one, since this function might be adversely affected by other enabled/disabled User preference settings/gadgets preventing the feature from operating properly (see the next option for more).
The other new User: preference option is under the Recent changes tab - just enable the checkbox next to Show Wikidata edits in recent edits to get that "working". Again, I use the term "working" for lack of a better one, since this function certainly is affected by other enabled/disabled User preference settings/gadgets known to prevent the feature from operating properly. The option to Group changes by page in recent changes and watchlist must be disabled at the same time the new Wikidata option is enabled in order for this feature to work (pretty much making both your Watchlist and Recent changes unfriendly, to say the least).
Hope that helped -- please report any additional "discoveries" concerning either new option. TIA. -- George Orwell III (talk) 00:23, 18 January 2014 (UTC)
Proposal to block bots without approval
Pending resolution and adoption by the Wikisource community of the proposed Wikidata changes, I propose that any bots making edits that have not been approved via our Confirmation process may be blocked without notice, until such time as the bot owner has been granted approval for testing and/or a full run. While this is already clear within the existing policy, some bot owners seem to feel immune to Wikisource expectations and the process of gaining community consensus. Only those bots on this list where the next confirmation is ongoing or in the future are currently approved. JeepdaySock (AKA, Jeepday) 11:53, 15 January 2014 (UTC)
- Support. No changes here, just enforcing existing expectations: Controversial changes are Unacceptable usage. JeepdaySock (AKA, Jeepday) 11:56, 15 January 2014 (UTC)
- Comment except that would be contrary to our existing policy. We clearly state that interwiki bots do not need approval to edit, with or without a flag. My issue was the rate, and that there had been an express request to coordinate through a page. So in its current form, I cannot support the proposal.—unsigned comment by Billinghurst (talk) 13:11, 15 January 2014 (UTC).
- Weak oppose: I was about to make the same comment as Billinghurst, per our policy interwiki bots don't require prior approval. I only add the "weak" because the "controversial changes" clause could be read as a separate condition, although the wording would imply that this is not the case (the Wikidata bots are not exceeding the pre-approved scope of editing interwiki links). - AdamBMorgan (talk) 13:21, 15 January 2014 (UTC)
- Support - in general, but folks don't seem to grasp the reality here - interwiki bots are pretty much obsolete with the advent of Wikidata, so that's a moot point, if not a moot policy as well, now. Interwiki links themselves are going to be a thing of the past - just go through Recent Changes - individuals (not bots) are already removing them per Wikidata. So what then? Start reverting folks in spite of the fact interwiki is DOA? This whole thing seems forced rather than welcomed and I'm inclined to defer to EncycloPetey's experience with WD until more clarity comes along. -- George Orwell III (talk) 20:33, 15 January 2014 (UTC)
- Support, this seems like a sufficiently contentious issue that it's worth throwing up road blocks for bot operators who aren't involved in our project and wouldn't be aware of our discussions until we have things sorted out. Prosody (talk) 23:19, 15 January 2014 (UTC)
Based on discussions in this section and related recent discussions, I have modified Wikisource:Bots#Community_authorisation so that interlanguage link bots are no longer excluded from the community authorization requirement. User:EmausBot was granted a bot flag and began the process of converting these links to WD, so this exclusion is no longer appropriate. JeepdaySock (AKA, Jeepday) 11:52, 30 January 2014 (UTC)
Pending proposals
Hi all. My computer went belly-up in early December (with about a full volume of proofread text, damn the luck) and I spent the time waiting for my Xmas spending to allow a new one developing some ideas about things we might be able to do here. Thinking of everything took another month or so, but I am now in a position to maybe start a piece in the Wikipedia Signpost about some things we might be able to do in the various entities. The list includes about 3 dozen proposal items for consideration, so it might be a long piece, and some of them might not be particularly productive, but, hey, it's maybe a bit of a start. Unfortunately, my work week starts tomorrow, so I probably won't be able to finish the composition until Thursday at the earliest, but I hope to have at least a draft of the whole thing by the end of the week, and might be able to add some additional, somewhat specific, ideas here later this week. They will probably include proposals for a bot which indicates the relative level of completion of various index items, some sort of WMF community portal for transclusion in all WMF entities, some sort of "core list" for each entity, indicating various things which might be among the most important or useful items for that entity, etc., etc., etc. Sorry for the prolonged absence, but, hey, sometimes that's the way the CPU crumbles. John Carter (talk) 18:25, 9 February 2014 (UTC)
Noticed this, and the associated mediawiki page, in the Tech News (see below). Since we are always interested in attracting new wikisourcerors, and additionally since editing here is a bit more complicated than on your usual wiki, I think it could be a very good idea to look into implementing the GettingStarted extension. Documentation seems a bit sparse right now, though. Thoughts? --Eliyak T·C 02:32, 13 February 2014 (UTC)
- Comment — I'm not too keen on incorporating more bells and whistles when it seems like our current bits & pieces could use some well-needed attention/maintenance as it is. In short, I have no problems with the premise - only with the timing. -- George Orwell III (talk) 02:59, 13 February 2014 (UTC)
- Can't resist: I once worked for the poker-machine industry. As a result I consider myself largely immune to bells and whistles, and am pretty (functionally, if not factually) contemptuous of those who are. AuFCL (talk) 03:08, 13 February 2014 (UTC)
- I support the concept, but am a bit worried the extension might be rather too Wikipedia-centric. Maybe I have missed an important point? AuFCL (talk) 03:04, 13 February 2014 (UTC)
BOT approval requests
Help
Other discussions
Have title boxes become excessively wide?
I am uncertain how long this has been the case (i.e. could be related to recent software changes, or maybe I just haven't been that observant) but has anybody else noticed how wide the pop-up messages have become? This is affecting the rendering of such things as {{SIC}}, {{popup note}} etc. For example: hover here for example currently produces a popup (for me) approximating:
Does this seem normal to everybody else? Viewer2 (talk) 13:03, 28 December 2013 (UTC)
- It's normal in my applications in PSM, in the Page: and the Main namespaces.— Ineuw talk 14:57, 28 December 2013 (UTC)
- Oh well, thank you anyway for indulging my state of (I hope temporary—but don't really expect all that much!) mental unbalance. Cheers, Viewer2 (talk) 21:54, 28 December 2013 (UTC)
- Are you using a recent version of Firefox? I've seen this effect net-wide recently, and I'm blaming Firefox.--Prosfilaes (talk) 11:23, 22 February 2014 (UTC)
- I'm not too sure about that--I use Firefox 27.0.1, but the tooltips turn out just fine. Do differences in OS change the way text is displayed, by any chance? —Clockery Fairfeld [t·c] 11:54, 22 February 2014 (UTC)
- Addendum: I use Windows Vista Basic. —Clockery Fairfeld [t·c] 12:15, 22 February 2014 (UTC)
- I'm using Firefox 27.0.1 and it looks OK to me, but I'm using Mac OS X Mavericks as my OS --kathleen wright5 (talk) 12:09, 22 February 2014 (UTC)
- Wikisource is outputting the code correctly (and succinctly) so it sounds like a browser-specific issue. If you're using Firefox, I suggest using Developer Tools (built-in) to quickly determine whether it's adding spurious CSS to the original. The Haz talk 16:40, 22 February 2014 (UTC)
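- As an aside for anyone debugging this: popup templates of this kind generally do nothing more exotic than set the HTML title attribute, and the tooltip's size and shape are then drawn entirely by the browser, which is why the same page can look different across browsers and versions. A minimal sketch of that markup (the text and styling here are illustrative, not the templates' exact output):

```html
<!-- Rough shape of what a {{popup note}}-style template emits: the popup
     text lives in the title attribute; the browser renders the tooltip. -->
<span title="This text appears in the browser's own tooltip"
      style="border-bottom: 1px dotted; cursor: help;">hover here</span>
```

If Developer Tools shows this markup arriving unchanged, the width problem is in the browser's tooltip rendering, not in anything Wikisource sends.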
Categorization of scanned books on Commons
Even if the question is about Commons, I feel it is more appropriate to discuss it here. We need to create a good categorization system for scanned books because now it is a complete mess. There are a number of related categories that are not organized in a clear hierarchy:
- commons:Category:Scanned English texts
- commons:Category:Scanned English books
- commons:Category:Scanned English books in pdf
- commons:Category:English Wikisource books
- commons:Category:Books (literature) in PDF
- commons:Category:DjVu files in English
- commons:Category:Scanned English magazines in DjVu
- commons:Category:Books in English
and probably some others.
Any ideas how to tidy it up? --DixonD (talk) 18:32, 8 January 2014 (UTC)
- Lately, I've just been using "DjVu files in English" for every book I upload, but I don't know what other people do. I'm not really sure what the purpose of "English Wikisource books" is meant to be, as any scan on Commons can be used on any Wikisource at any time; it can probably be emptied and deleted. "Books in English" would be needed on Commons for general things like photographs of book covers and so forth.
- My thoughts—the hierarchy for English scans should probably be something like:
  - Scanned English texts
    - Scanned English books
      - Scanned English books in DjVu
      - Scanned English books in pdf
    - Scanned English magazines (although this may be unnecessary depending on the PDF category)
      - Scanned English magazines in DjVu
      - Scanned English magazines in PDF (although this would be empty)
- It might be better to skip "Scanned English books"/"magazines" and just put the categories with the file format suffix directly under "Scanned English texts".
- The file formats have their own categorisation, so that would need to be covered as well. The category "Scanned English books in DjVu" would need to be created, with "DjVu files in English" as the parent for both this and the magazines category. So that hierarchy would be:
  - DjVu files in English
    - Scanned English books in DjVu
    - Scanned English magazines in DjVu
- The category "Books (literature) in PDF" could be diffused by language and deleted, leaving the equivalent PDF file hierarchy:
  - PDF files in English
    - Scanned English books in pdf
    - Scanned English magazines in PDF
- NB: I should probably mention that I created the "Scanned English magazines in DjVu" category. - AdamBMorgan (talk) 18:34, 9 January 2014 (UTC)
  - Thanks for your feedback! I also had doubts whether we really need all those subcats of commons:Category:Wikisource books by language. I also have some thoughts about categories like "Scanned English books in DjVu/PDF". I don't think the file format of a book is really important, since Wikisource users will not see any difference between pdf and djvu when working with books here. So, let's say, for some English book in pdf, we can just add it to the categories "Scanned English books" and "PDF files in English" and skip "Scanned English books in pdf" altogether. --DixonD (talk) 19:54, 9 January 2014 (UTC)
    - On one hand, I think it might be easier to just have one category that covers both the file type and the scanned English books subject. On the other, two separate categories are more flexible and may be less complicated for general use. Having a look at Commons, "Scanned English books in DjVu" already exists but redirects to "DjVu files in English" (as do the French and Polish equivalents), so you may be right. Which reminds me, the other Wikisources may have an opinion on this. (eg. What happens to commons:Category:Scanned Russian books in PDF?) If there are no problems here or from other quarters, I don't mind losing the "in DjVu" and "in PDF" subcats. We seem to agree on "English Wikisource books" so, unless anyone objects, we can probably empty and redirect that category to start with. - AdamBMorgan (talk) 14:38, 10 January 2014 (UTC)
I think that there are 2 different category branches, not necessarily overlapping: one about the content (books, magazines, etc.), and one about the format (DjVu, PDF, etc.). These should be separate. One may want to look for all content in DjVu format, or one may want to look for all books, whatever the format. Regards, Yann (talk) 07:58, 3 February 2014 (UTC)
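For illustration, the two-branch scheme just described would mean each file's description page on Commons carries one content category and one format category, nothing more. A sketch in wikitext (the category names follow the proposals above and are not settled):

```wikitext
<!-- On e.g. File:SomeJournal1884.pdf at Commons -->
[[Category:Scanned English books]]  <!-- content branch -->
[[Category:PDF files in English]]   <!-- format branch -->
```

Searching the intersection of the two branches then recovers anything a combined category like "Scanned English books in pdf" would have held.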
Some opinion
- Do we care whether it is "scanned" in relation to categorisation, or is that superfluous and an artefact of an earlier time? It is the medium and the language that are relevant.
- If we are following the de facto standards, we would have "… in English".
- I wonder whether, if we are smarter, we can look to utilising {{book}}, the Wikisource link, and the language category, and even a template there like {{DjVu}}, to do some of the generic categorisation.
— billinghurst sDrewth 23:27, 3 February 2014 (UTC)
TOC links
Hi, I was reading The Merchant of Venice lately and I've noticed how unwieldy the page length is for this particular piece. For example, if I was doing research on this particular document, I'd like an option to skip a few Scenes that I would not need and return to the table of contents at any time, to navigate to other Scenes and Acts. So to that end, I've developed a template in my sandbox that would help me get to the TOC very quickly by clicking on the link. I could try to expand the template functions to do more than just link back to the TOC though. I think this is best used at the end of the particular Act or Scene for the Merchant of Venice, and for particularly large and unwieldy texts. Can I use this on these pages, and move it out of my sandbox into template space? TeleComNasSprVen (talk) 10:44, 9 January 2014 (UTC)
- One can break up the page into separate acts, with The Merchant of Venice as the main page and then The Merchant of Venice/Act I, The Merchant of Venice/Act II, and so on; the header of each Act provides for navigation between Acts and skipping to a specific one, similar to what was done here: Physical_Education Part V. — Ineuw talk 22:03, 9 January 2014 (UTC)
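- As a rough sketch of the arrangement Ineuw describes, each Act subpage would carry the standard {{header}} with previous/next navigation back through the work (the field values here are illustrative, not copied from the actual pages):

```wikitext
{{header
 | title    = [[../|The Merchant of Venice]]
 | author   = William Shakespeare
 | section  = Act II
 | previous = [[../Act I|Act I]]
 | next     = [[../Act III|Act III]]
}}
```

The main page then only needs a table of contents linking to the subpages, so a reader can jump between Acts without scrolling one very long page.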
- On my screen at least there is a lot of ugly whitespace to the right of the text. Would we normally have a discussion before moving the sections off to their own subpages, lest we fear breaking outside links or any necessary due process? TeleComNasSprVen (talk) 08:44, 1 February 2014 (UTC)
- A discussion is not needed. Any external links will be to the main page, which will still be there with a set of links to the Acts/Scenes, so just need to check for internal links to any anchor points and repoint them after splitting the work. Beeswaxcandle (talk) 09:09, 1 February 2014 (UTC)
Google books and archive.org
I've not edited Wikisource before, so apologies in advance if I'm asking a silly question, but I've not been able to find anything relevant through the help page. archive.org has a copy of a journal published in 1884, Transactions of the Lancashire and Cheshire Antiquarian Society volume 1. I checked that the copyright has expired in the UK for all the authors and that checks out fine. The problem is that the PDF and djvu files start with a piece from Google Books (who seem to have helped with the digitisation) in which they lay out some usage guidelines. Namely:
- Make non-commercial use of the files
- Refrain from automated querying
- Maintain attribution
- Keep it legal
Two and four aren't issues, but one and three might be. Has anyone come across this before? How was it handled? Nev1 (talk) 18:06, 16 January 2014 (UTC)
- An Account of the Manufacture of the Black Tea, As Now Practised at Suddeya in Upper Assam, By The Chinamen Sent Thither For That Purpose by C.A. Bruce has the same warning at the beginning of its PDF. It doesn't appear to have had anything special done for it, but I'm not the original contributor. Just another example of an existing work with this "restriction" page in place. Mukkakukaku (talk) 01:56, 17 January 2014 (UTC)
- Although Google put this statement on the front of the works they digitise, some consider it to be w:Copyfraud. There is no new copyright gained by scanning an image. Once a work has passed into the public domain, it can't be taken back out. Our preference is to delete the Google notice before uploading the work to Commons. However, the earlier Google scans are often poor, particularly where images are concerned and a non-Google scan from IA is preferred. For the particular volume you mention above there is this scan from the University of Toronto, which looks much better to me. Beeswaxcandle (talk) 05:14, 17 January 2014 (UTC)
- That scan doesn't have the notice from Google either. Thanks, that's a very useful find. In the meantime, I'm experimenting with a simpler source (just one author). Nev1 (talk) 17:41, 18 January 2014 (UTC)
- @Nev1, Mukkakukaku, Beeswaxcandle Please check the Google Books section of Help:Internet Archive/Requested uploads. Solomon7968 (talk) 09:45, 4 February 2014 (UTC)
Do we want to change the headers?
I ask more out of curiosity than anything else. There are some comments on this page towards this end and I've seen it raised before. If we do want a change, what form will it take? The suggestions I've read sound a lot like the German Wikisource header, which is itself a lot like a Wikipedia infobox, floating to the top right of a page. It would change the look of the whole project; even the default layout would probably have to change to match (to "layout 3", I think), but it would be able to display more information and be more flexible in general. Any new, expanded header that keeps the horizontal bar across the top is likely to take up a lot of space, which probably isn't desirable (although the French and Italian headers do something like this). As we're going through changes anyway, with Wikidata integration and so forth, and we've just passed a milestone anniversary, it might be a good time to think about what other changes we want (if any). Any thoughts? - AdamBMorgan (talk) 12:49, 20 January 2014 (UTC)
- We can't "change" the header right now even if we wanted to - any "new" header would still be subject to Dynamic Layouts. And being "subject to" Dynamic Layouts is primarily what limits the possibilities as well as the abilities of the current header template to begin with. Same story with templates added at the "end" of the textarea field - license banners and authority control bars shouldn't be "inside" of the Dynamic [re]formatting scheme either. -- George Orwell III (talk) 15:07, 26 January 2014 (UTC)
Universal Language Selector now a preference and disabled by default
WMF has changed the way that mw:Universal Language Selector (ULS) is configured and delivered to users. Whereas ULS had been set as universal, it has evidently been causing load and other issues. The change made yesterday was that ULS is now a user preference, off by default, found in Special:Preferences on the first tab. "So what!" you may say; well, for those who more widely utilised the languages, you will now need to enable that setting. Those templates that we had utilised, like {{blackletter}}, are now not functioning as expected unless someone has the setting turned on, which is problematic for the purpose of the template. Our issues have been raised in bugzilla:46306, alongside the other discussion, and they will be considered as the developers move to a better solution. — billinghurst sDrewth 02:21, 22 January 2014 (UTC)
- Whoa I didn't even know about blackletter!
- So pretty much the problem is that by disabling this feature, we can no longer embed the custom webfont, is that right? Is there a reason we don't just add a link declaration and use Google Font's Unifraktur Maguntia (blackletter template font) instead? Or, alternatively, host that (free) webfont somewhere and create our own @font-face declaration for it in the default CSS? Mukkakukaku (talk) 03:18, 22 January 2014 (UTC)
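- A self-hosted rule of the kind being suggested would look roughly like this (the URL is a placeholder for wherever the font would actually be hosted, and the class name and fallback chain are illustrative, not the template's real CSS):

```css
/* Sketch: serve UnifrakturMaguntia ourselves instead of relying on ULS */
@font-face {
  font-family: "UnifrakturMaguntia";
  src: url("//example.org/fonts/UnifrakturMaguntia.woff") format("woff");
}
.blackletter-text {
  /* serif acts as the fall-back when the webfont fails to load */
  font-family: "UnifrakturMaguntia", serif;
}
```

This keeps the template working for readers who have nothing installed locally, at the cost of hosting and maintaining the font file somewhere ourselves.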
- Blackletter does (or did) use UnifrakturMaguntia, which was originally hosted via WebFonts, and then by the Universal Language Selector when it replaced the previous software. I haven't read through Google Fonts yet, but that might be an alternative, although I don't know if it will be compatible with the Wikimedia Foundation (Google is a for-profit corporation). Having the preference set to on by default on Wikisource makes more sense. I'm not sure if there is any extension we can use locally that won't keep getting overridden to meet Wikipedia's needs; I wonder if WebFonts can run like that. - AdamBMorgan (talk) 11:20, 22 January 2014 (UTC)
- It has been raised with WMF they are aware of our circumstance, and believe that once they have got the bulk of the issue under control that there is potential to grab their attention for a solution that doesn't force a whack of webfonts, yet allows our limited use situations. Here's hoping. — billinghurst sDrewth 12:55, 22 January 2014 (UTC)
- FWIW... I applied the so-called "work-around" on the BL template to no real effect, except there is a 'fall-back' font now (serif). Then it dawned on me - just install the damn font locally. Bam - everything works as before and no resource loading 'tick' to boot. I know that's not an optimal solution to all this; nevertheless, it works as far as the BlackLetter template goes. -- George Orwell III (talk) 14:35, 26 January 2014 (UTC)
- ... So the "fix" is to have all users download the font themselves? Mukkakukaku (talk) 15:36, 26 January 2014 (UTC)
- Sorry for not being more clear... while downloading & installing the font is a solution to the BlackLetter template issue - it's not something we should be imposing on or demanding of our visitors. I would make the inclusion of the font a gadget applied by default for everyone (if I knew how), and recall something like that was done at least once before (DjVu Sans?) somewhere. The question then becomes are there any other "well used" templates like BlackLetter that still need some sort of solution. -- George Orwell III (talk) 15:50, 26 January 2014 (UTC)
- Isn't this precisely what bugzilla:49499 was all about? Is it worth (does anybody know how to) resurrect an old report? Viewer2 (talk) 21:39, 26 January 2014 (UTC)
Poem of the Day
As per this section above, I've made a new page for the Poem of the Day. I'd appreciate it if some of you could help with this. —Clockery Fairfeld [sic] 08:56, 25 January 2014 (UTC)
- Discussion has started at Wikisource talk:Poem of the Day --Eliyak T·C 03:14, 26 January 2014 (UTC)
Appeal for a checkuser ...
Two votes missing to reach minimum number of votes to appoint a new checkuser, see Wikisource:Administrators#Nomination_for_CheckUser.
No bias towards any candidate/option/preference in voting or not-voting; just a highlight to draw attention to the topic.--Mpaa (talk) 17:46, 25 January 2014 (UTC)
- I second this post because it has troubled me these past months that we cannot find 25 users to approve a checkuser. It is disheartening that most don't realize the amount of work it takes to manage this site.— Ineuw talk 20:28, 25 January 2014 (UTC)
- fwiw... It's 25 or more by now, and the stewards have his request. -- George Orwell III (talk) 03:06, 26 January 2014 (UTC)
An interesting wiki feature behavior
A paragraph commencing with a semicolon (;) will ignore a terminating colon, e.g.
;This line doesn't display the end colon:
as in
- This line doesn't display the end colon
Any comments? — Ineuw talk 02:55, 26 January 2014 (UTC)
- It's the wiki notation for definition lists: see Meta:Help:List for details. Hesperian 03:21, 26 January 2014 (UTC)
- Your mistake is to think a paragraph is actually a paragraph 100% of the time when under the spell of the wiki mark-up. That is actually the start of a defined list (DL), and "it" (the DT, that is) is "waiting" for the appropriate definition (DD) to follow (so the DL can close properly), as depicted below
<dl> <dt>Defined term:</dt><dd>definition</dd> </dl>
- Defined term:
- definition
- Every time you use a semicolon and/or colon to indent and/or bold text, you are creating a [crippled] defined list. "We" shouldn't care about stuff like that because these 'symbol' shortcuts are designed for ease of editing wikipedia articles & discussions - they were never meant to be used in the faithful reproduction of works as true as possible to the originals as published.
- The only sure way to get some text to always be & act like a paragraph should is to use paragraph tags - end of story. -- George Orwell III (talk) 03:32, 26 January 2014 (UTC)
@Ineuw: It will ignore a colon anywhere in the line; if you wish to display one, use the HTML entity &#58;. Of course, noting George's correct qualification, though we all know that it is a widespread (mis|ab)?use on wikis. — billinghurst sDrewth 06:34, 26 January 2014 (UTC)
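(For illustration, a minimal wikitext sketch of the behaviour discussed above; the example text is arbitrary.)

```wikitext
<!-- The first colon splits the definition term (DT) from the definition (DD): -->
;Defined term: definition
<!-- A trailing colon is consumed by the parser; the &#58; entity displays a literal colon instead: -->
;This line keeps its end colon&#58;
```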
- Thanks to all. Everything eminently clear.— Ineuw talk 07:07, 26 January 2014 (UTC)
Latest tech news from the Wikimedia technical community. Please inform other users about these changes. Not all changes will affect you. Translations are available.
Recent software changes
- Pages from Wikimedia sites now load faster in your browser thanks to "module storage", a way for your browser to save data like JavaScript and CSS on your computer to avoid downloading them again. See video. [3]
- The code used to show videos has changed. You should be able to play the video on the page, with the play button on top of the video. If you see the play button on the right of the video, or if clicking on the video leads you to the original file, please file a bug or tell User:Bawolff. [4]
- The Special:ActiveUsers page will be removed because it's too slow. [5]
- The latest version of MediaWiki (1.23wmf11) was added to test wikis and MediaWiki.org on January 16. It will be added to non-Wikipedia wikis on January 28, and all Wikipedia wikis on January 30 (calendar).
Problems
- On January 21, Universal Language Selector was turned off on all Wikimedia sites because it makes pages load slowly. If you want to use web fonts, or write in scripts that aren't on your keyboard, you need to add it as an option in your preferences. It will be turned back on when the issues are resolved. [6]
- For about 20 minutes on the same day, there were problems with CSS and JavaScript due to high server load.
Future software changes
- You can give comments about the new version of "Winter", a proposal to have a fixed toolbar at the top of wiki pages. [7]
Tech news prepared by tech ambassadors and posted by MediaWiki message delivery • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.
09:46, 27 January 2014 (UTC)
Access to data in Wikidata
Hey :)
(Sorry for writing in English. I hope someone can translate this for me where necessary.)
Not long ago we enabled language links via Wikidata for Wikisource. This seems to have gone rather smoothly. Thanks to everyone who helped! But as you know this was only the start. What is actually more interesting is access to the data in Wikidata like the date of birth of an author or the year a book was published. We have planned this for February 25th (unless any issues arise). You will then be able to use the data in your articles. I hope this will open up a lot of new opportunities for you.
If you have any questions d:Wikidata:Wikisource is a good place to find help.
- I don’t see any details about how or what exactly phase 2 consists of, nor do I see any testing or problem solving. There is some talk about Dates of Birth, and billinghurst mentions authority control. JeepdaySock (AKA, Jeepday) 16:12, 27 January 2014 (UTC)
- Are they only planning on harvesting our data, or are they planning on replacing ours with theirs?
- If they are harvesting, how are they going to keep consistent if our data changes?
- If they are replacing, how are they going to validate for identical dates, and what are they going to do if different?
- As far as I can tell, they aren't going to do anything. This will allow us to pull information back from the Wikidata database. So, in theory and for example, an author page could automatically populate itself with some data if it is linked to Wikidata. What we do with it is up to us. I'm pretty sure we can set up templates to read Wikidata only if a parameter is blank at our end, so we can override locally if preferred (or just ignore Wikidata and do our own thing). - AdamBMorgan (talk) 17:48, 27 January 2014 (UTC)
- How does the automatic population of the author template (using the gadget) from Wikipedia currently work? I presume that something similar could be done from Wikidata—or even from both. Whether that's desirable is a different issue. Beeswaxcandle (talk) 04:57, 28 January 2014 (UTC)
- I think it's javascript that reads Wikipedia's API and tries to find a matching page. I'll have to find the page again to check but I think it fills in the birth & death dates by looking for the Y births and Y deaths categories. I don't entirely grok javascript or APIs right now but it probably won't be too much more work to point it at Wikidata instead and the results may well be a lot more accurate.
- Another possibility would be to create the author page, then go to Wikidata and link it to a data item. If the author template is set up to pull data, it would be able to self-populate once the link is made.
- Incidentally, regarding how we will use Wikidata: I think Aubrey/Micru/TPT want book metadata to be centralised at Wikidata, so our Index pages and Commons' Book template will pull data from there. If so, any page using the "magic" automatic header will also be pulling data from Wikidata via the Index page. Additionally, George Orwell III has been looking at Wikipedia's current {{authority control}} template, which on their side pulls its data from Wikidata unless overridden locally. I was thinking about author demographics (sex, sexuality, etc), just for information purposes, but I don't intend to even try any time soon. - AdamBMorgan (talk) 12:34, 28 January 2014 (UTC)
To note that I am not the expert, and do not wish to be the expert, nor wish to be considered to have the ambassadorial role for WD↔WS, I have enough to do. That said, I do know a little and I am willing to share the bits that I know.
Phase 2 of the Wikidata implementation is opening up data links, based upon the Qnnnnnn items, back to enWS. It means that for a specific item we can extract the detail for a specific property. As an example of WD in play in the wild, compare the output of {{Authority control|VIAF=12326295}} at an article at enWP with what it produces here:
with the script pulling the properties for the individual, it can show all the property data. It means that the data populated there is retrievable here.
What it allows us to do is configure templates so that they retrieve that data. We would presumably wish to step through each major template {{header}}, {{author}}, ... It says to me that we need a Wikisource:Wikidata page that allows for development and planning discussions, identification of how and where we can best utilise WD, referring specific template discussion to the relevant template talk page, and maybe a Help:Wikidata that explains how to use Wikidata here. We can have general discussions here as contemporary issues and changes arise, and refer onwards for the detail, if we so choose.
See also
- d:Wikidata:Phase 2#Phase 2 technical information
- Wikipedia:Wikidata
- example of an infobox with #property usage
— billinghurst sDrewth 04:11, 29 January 2014 (UTC)
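(As an illustration of the sort of template retrieval described above: a sketch using the #property parser function provided by the Wikibase client, with a hypothetical parameter name — not a finalised design.)

```wikitext
<!-- Inside a template such as {{author}}: use a locally supplied value
     if one is given, otherwise pull the date of birth (P569) from the
     Wikidata item connected to this page -->
{{#if: {{{birthyear|}}}
  | {{{birthyear}}}
  | {{#property:P569}}
}}
```

This matches the override pattern discussed in the thread: a blank local parameter falls through to Wikidata, while a filled one takes precedence.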
- That is pretty much how I understood "Phase 2" things to be as well. One problem is that WP's Authority control template uses Lua to help pull and render the Wikidata, whereas ours is nowhere near as flexible. I tried to get this going in its sandbox but, without phase 2 being in place for us, of course it returned Lua script errors. Still, once I removed that portion from the template/module code, it seems to work fine, except they have sources we don't have & vice versa...
- Maybe our first step should be to bring this Lua ready {{Authority control/sandbox}} version up to speed by adding all our current sources to it and then propose it for community acceptance as a replacement for what we use now. -- George Orwell III (talk) 03:20, 30 January 2014 (UTC)
- IMO, it would be better to import our additional sources to Wikidata and link from there, instead of retaining them here locally.--Mpaa (talk) 08:10, 30 January 2014 (UTC)
- Thanks guys (@Billinghurst:, @George Orwell III:, @AdamBMorgan:) for clearing things up. I would like to emphasize that Wikisources can do what they want to do.
- As always, community is king. Wikidata is just a (wonderful, IMHO) tool that can help us a lot in centralizing and structuring data. I always like the KISS example, so I feel that we should try to centralize things that are easy to do, like biographical data for Authors and bibliographical data for books. Of course, we will need to feed them into Wikidata from the single Wikisources, at the beginning: then, we call these data with Lua templates from WD to WS. How we will do all this is still to be decided (of course, @Tpt: has plans). There is a lot of data we still don't know if we want on Wikidata (SAL level, for example, or data from Index pages). I'm happy (and still laughing :-)) that @AdamBMorgan: identified me and @Micru: and @Tpt: as a single person, but, really, we are just trying hard to coordinate a Wikisource integration, as a "single" community. Wikisources are scattered with small communities all around the world; we feel that coordination and helping each other is a good thing to do, and it helps all of us. This is the core reason and rationale we asked for an Individual Engagement Grant in the first place, and the reason we asked for a renewal. Aubrey (talk) 10:21, 30 January 2014 (UTC)
HotCat reactivation fails
I found a discussion on Bugzilla relating to a solved problem with HotCat on the Commons. I wonder if this may solve our problem before I file a bug report, but it requires someone with JavaScript familiarity to look at the post and its link to the Commons HotCat code.
The instructions to resolve a non-functioning HotCat in the related documentation work when HotCat is checked and saved as the first selected gadget while Common.js is also empty. It keeps on working when adding other gadgets and common.js procedures. But when disabled afterwards, it cannot be re-enabled. It is an unreasonable expectation to deselect the gadgets and empty Common.js each time one wishes to re-enable it. Thank you.— Ineuw talk 19:52, 27 January 2014 (UTC)
- Taking a step back ... What problem with HotCat? It works fine for me. Where? How? What? is not working about HotCat. — billinghurst sDrewth 23:11, 28 January 2014 (UTC)
- Currently HotCat is selected in Preferences but it's dead. Some combination of gadgets interfere with the re-enabling of the gadget. At times I disabled it for clarity, and nothing happens after re-selecting it. If I were to clear the Common.js code and deselect all gadgets except HotCat, then it works. Then, re-enable the few gadgets I use and return the .js code and all is fine. I checked HotCat functionality after activating a gadget and saving preferences one at a time to find the problem but everything is fine. — Ineuw talk 01:49, 29 January 2014 (UTC)
- HotCat is not dead, it is your javascript implementation that is failing. You will need to work to isolate the components that are the combination problem, not much anyone can do until you have isolated the conflicts. You may even need to comment out lines in your .js files as part of turning off gadgets in play. Nothing that anyone can easily do, unless you can find a local expert in js who can work with you on diagnosing javascript errors via the console. (I am presuming that you have looked there [8]) — billinghurst sDrewth 02:32, 29 January 2014 (UTC)
For kicks - I've tweaked our condition statement that excluded the "Page" namespace for HotCat to use the previous ns ID (104). Can't say this is for sure, but I think this is but one potential problem with importing/loading gadgets from other projects that aren't really being tested for universal usage. Some sites use "page" as a designator for a 'page' as in page = 1 mainspace article, while we use it as the designated name of namespace 104 (Page:). This usage is ripe for all sorts of possible conflicts, the way I see it.
I also cut the parts about 'maxage=' since other sites do not seem to be using such qualifiers in their calls to load the HotCat script from Commons. Please let us know if anything has changed. If not - I pretty much agree w/ Billinghurst's last. Your particular setup seems to be more problematic than others to date. -- George Orwell III (talk) 04:07, 29 January 2014 (UTC)
- Thanks for the directions. I knew that I was the only one who could test it, and I knew how; it's just that at the time of posting, I didn't have a strategy or an order of disable/enable in place and was too tired to focus on it. Earlier today, I found the problem almost immediately. The CharInsert .js code doesn't interfere with HotCat. I was able to disable & enable HotCat as often as I wished. As soon as I installed my custom .js toolbar, HotCat failed, so I removed the toolbar. — Ineuw talk 01:26, 30 January 2014 (UTC)
- That observation is not unexpected and part of the issue(s) with using the old button toolbar/pre-WikiEditor schemes. In an ideal world, one that is either intentionally in alignment with wmf development or just coincidental by chance, the aim is to eventually rid ourselves of such "dumb" components being universally loaded, slowing things down or worse - conflicting with other components/options. From what I've been able to gather, anything developed before this "Summer of Code" initiative is suspect unless somebody has been diligently updating or modifying those aspects along with the latest core improvements. In the specific case of the "old" button toolbar(s), the aim (in simple terms) is to eventually be able to remove/reassign the option in preferences ~ editing tab ~ show [old] edit toolbars for everybody and deprecate that entire scheme once and for all in the main code. I dare folks to try disabling that checkbox and enabling the next two that follow it. If results are anything like mine, the 1st will screw up editing in the Page: namespace and the second makes using the RegEx gadget pointless, if not crippled to some degree or another. Regardless, until that preference checkbox is disabled, you are always loading a component that is only available for backwards compatibility; further development of or support for it is not going to happen. -- George Orwell III (talk) 03:00, 30 January 2014 (UTC)
- I also checked CharInsert and it still only works intermittently whether the gadgets are active or not. However, if I move the selection to any of the built in char sets and refresh the webpage, then the User: characters always display. I hope that this may be a clue for GO3. — Ineuw talk 01:26, 30 January 2014 (UTC)
- I happened to check this today with some friends and nothing like that happened on the latest FF & Chrome browsers, or under IE8, 9 or 10 (adding the "4th option" in a User's common.js always put an end to that whenever it did occur). Whatever it is, it seems specific to you & your setup. I don't know what else to try at the moment but I'll keep thinking about it. -- George Orwell III (talk) 03:00, 30 January 2014 (UTC)
- Thanks for looking into it again. It's the only code I have in my .js. I no longer use Vector.js or Vector.css. Do you think that code in my Common.css interferes? Also, since I removed the edit toolbar, CharInsert appears more often than not, but still not reliably.— Ineuw talk 03:16, 30 January 2014 (UTC)
In the process of analyzing categories specific to PSM, I am ignorant of the outcome of past discussions on categorizing obituaries. Are we keeping this category or not? Is there anyone who is managing this? Thanks in advance for enlightenment.— Ineuw talk 20:29, 27 January 2014 (UTC)
- Hi. It has always been controversial, like the redirects which are categorized. I am not standing up for it once again; do what the community will decide about it. Bye--Mpaa (talk) 20:37, 27 January 2014 (UTC)
- Thanks for refreshing my memory. There was no community decision as far as I remember. Any proposal is irrelevant unless the salient issues are discussed to inform users of certain implications.
- As I see it, our options are as follows:
- Leave the category as is (incomplete).
- Removing the existing links, anchors, redirects and then the category. (No one likes to destroy others' contributions, or I certainly wouldn't.)
- Continuation of categorization is the most problematic because PSM didn't follow an organized system of notices. They appeared in dedicated obituary sections, embedded with other minor announcements, mentioned in the middle of, and out of nowhere, randomly placed to fill empty page space, or were the topic of one or more dedicated articles. Completing categorization would be a major search effort.
- Finally, this most recent post HERE really confuses me. How are our efforts in providing complete author info here related to Wikidata? Are they planning to take over the Author namespace? Much of this info is already copied from existing Wikipedia articles. Comments would be much appreciated. — Ineuw talk 21:37, 27 January 2014 (UTC)
- I have no strong opinion about the obituaries. Regarding Wikidata, we choose how much or how little we want to take from Wikidata. We could, if we want, keep data on that project and just retrieve the bits we want (like the image from Commons) and even then the templates can be set up so that a parameter here overrides data from Wikidata; or we can carry on regardless. - AdamBMorgan (talk) 12:21, 28 January 2014 (UTC)
- Wikidata is mostly irrelevant for what you are asking. Though I can see that we need to be looking to a general discussion on how we best exploit Wikidata. At this point, getting ourselves connected is the priority, and we can take our time with the exploitation. — billinghurst sDrewth 23:07, 28 January 2014 (UTC)
To (mis)quote ObiWan Kanobe ... "Use the tools, Luke. Use the tools". We should be looking to utilise our installation of mw:Extension:DynamicPageList (Wikimedia) (see meta:Help:DPL for the help pages). This allows for broader-level categorisation, and intersect categorisation, without the need to overly categorise like is being done here; eg. we should be able to have these works in "Category:Obituaries" and in "Category:PSM articles" and spit out the intersect. Amgine is a helpful bloke, so if there is anything that we don't understand, or think would be a good feature, then we can ask. — billinghurst sDrewth 23:07, 28 January 2014 (UTC)
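(A minimal sketch of the intersect described, using the tag syntax of the Wikimedia DynamicPageList (Intersection) extension; the category names are illustrative.)

```wikitext
<DynamicPageList>
category = Obituaries
category = PSM articles
</DynamicPageList>
```

This would list only the pages that sit in both categories, so a broad "Obituaries" category and a broad "PSM articles" category can stand in for a hand-maintained "PSM obituaries" category.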
- I followed the links to the extension (a very neat job) and looked at the examples, but must study them further to understand better. Thanks — Ineuw talk 16:35, 30 January 2014 (UTC)
150,000 Validated Pages
Mike s has just made the 150,000th page validation with his edit to Page:Hector Macpherson - Herschel (1919).djvu/26. We reached 100,000 in late December 2012, which means that we are validating at an average rate of over 120 pages per day. Beeswaxcandle (talk) 20:05, 31 January 2014 (UTC)
Importing more Public Domain templates from Commons
I'm not sure if this is the right place to ask this, but I feel that managing these old works, and having to remember to increase the date on each individual work every five or so years, is quite tedious. To solve this problem, especially with a database as large as Commons has, Commons users created the PD-old-auto template, which automatically categorizes a work based on the years elapsed since the date of death/publication. I just want to ask if we can import that and adapt it into something similar for the works hosted here on Wikisource, easing up some of the maintenance work. Note, however, that the version on Commons also probably contains many modules we don't need and localization parameters designed for their international audience (which is probably better suited to Oldwikisource than here), and so we may have to remove those elements when we adapt it properly. But in the long run it would probably scale better and reduce the maintenance load. TeleComNasSprVen (talk) 08:55, 1 February 2014 (UTC)
- We've already got {{PD/1923}} and {{PD/1996}}, which are dynamic templates for this kind of thing. Are there others that we need? We probably need to use a Maintenance of the Month to go through any plain {{PD-1923}} or {{PD-1996}} and update them. Beeswaxcandle (talk) 09:04, 1 February 2014 (UTC)
- Oh sorry, I didn't realize that those templates already existed when I did my initial check. It seems Commons already has its versions of those templates too, located at Template:PD-old-auto-1923 and Template:PD-old-auto-1996, which as you've probably noticed are simply more specific versions of Template:PD-old-auto with a dash in front of them and the year as the parameter. Perhaps I should create a Template:PD-old-auto page and disambiguate it into these two templates and give a friendly warning when someone tries to use it, so we don't confuse any Commonsies coming here. TeleComNasSprVen (talk) 17:19, 1 February 2014 (UTC)
- On your second point, I haven't found any other Public Domain templates that we currently lack from Commons when I did the checks and comparisons, but I suppose I could ask the site admins on Commons somewhere like the Village Pump to perhaps query their database for all their Public Domain templates to do a proper comparison. TeleComNasSprVen (talk) 17:21, 1 February 2014 (UTC)
- Comment All PD templates on Commons should appear in either Commons:Category:PD license tags or one of its subcategories. --Stefan2 (talk) 22:40, 4 February 2014 (UTC)
New {{Border}} template
I'm not sure if this is the place to post this, but I found the {{Border}} and {{Box}} templates not able to suit my needs, so I completely overhauled the former, keeping the defaults. (Note: I know some editors create a 1x1 table and use the {{ts}} template but by HTML standards that's deprecated.) There are now 8 additional (and all optional) parameters that you can set in addition to the content property itself. I also put examples in the documentation which I hope will clear a few things up. I hope this can suit everyone's needs. Please feel free to write on the Discussion page if you have any issues or something needs to be clarified. It's one of my first templates, but I'm fairly confident it works the way I intended. Thanks, The Haz talk 19:25, 1 February 2014 (UTC)
- Thanks for the update and notification. We generally haven't had a notification process, as we would normally rely on notes being added to the respective template talk page, and if it is possibly controversial, then discussion beforehand, and usually create the sandbox and components. Hopefully you have tested that nothing was broken. — billinghurst sDrewth 06:20, 2 February 2014 (UTC)
- I did test it in the sandbox beforehand. It was only being used on two pages at the time, so I felt that there would never end up being discussion unless I just went and implemented it. And of course, I made sure to maintain the default as the old format (which was not customizable), except for the actual border color (changed default from dark grey to black). And of course I wrote up some documentation with examples. The Haz talk 21:25, 2 February 2014 (UTC)
- Would like to learn how to use this template properly. Perhaps someone in the know give me a hand with THIS PAGE? Thanks in advance.— Ineuw talk 19:06, 3 February 2014 (UTC)
- Done. Here's the code
{{border|2=500px|9=30px|
- As noted on the template documentation page, this creates a 500px box, and changes the padding to 30px. Padding is the space between all four edges and the edges of the next piece of content within the box. It makes that space unavailable in a manner of speaking.
{{border|7=center|
- This creates a second box, with maximum width (where the padding from the first box ends), and centers the content inside of this second box.
- Note that I didn't need to center content within the first box as the second one takes up the entire width of available space (the default). The way you were trying to do it was simply by setting a fixed width of the inner box as well. However, you wanted the same margin all around, so padding of the first box is the way to go for that. The height of each is determined by the content. The second box is as tall as the image plus a default 5px padding times 2, and the first box is the height of the second plus the 30px padding times 2.
- As an aside, you should think about cropping out just the image from that page and uploading it to Commons. Then type the text on that page. See what I did here. The Haz talk 22:53, 3 February 2014 (UTC)
The following discussion is closed:
Request at meta subsequently withdrawn. ~ DanielTom (talk) 20:20, 3 February 2014 (UTC)
Redundancy
As somewhat of a newcomer to WS, but a longtime editor at the other Wiki projects, I was surprised to find how many redundant formatting pages exist here. To me it's a bit of a waste of time, not only to write out the different pages but for someone new to search through them. Here is one example of redundancy: how to use <pagelist />:
- Index Pages / Pages
- Page numbers / Page numbers in the Index namespace
- ProofreadPage / The <pagelist/> tag
People obviously put a good amount of time and effort into each of these pages. What I am saying is that the help section could probably use a restructuring (a new tree) and then a deletion of anything not needed. My hope is that a good set of help pages, with information in only one place, means that the information is easier to find and, also importantly, easier to fix/update as needed. —unsigned comment by Hazmat2 (talk) .
- I think there is general recognition that our help pages are in dire need of some tender loving care; we're just waiting for someone to come along with some of that love to lavish. You? Hesperian 08:14, 2 February 2014 (UTC)
- I would be up to the task, eventually. The Haz talk 16:30, 2 February 2014 (UTC)
- If you wait until you know it all, then you forget what parts need to be updated (at least in my experience). If you find something is wrong, if you know how to make it right, fix it. If you don’t know how to fix it ask at Wikisource:Scriptorium/Help then fix it. JeepdaySock (AKA, Jeepday) 16:08, 3 February 2014 (UTC)
Proof-reading multi-column scans
I'm trying to transcribe this three-column document at Index:The Copyright Office, Policy Decision on Copyrightability of Digitized Typefaces.pdf, and I'm finding it fairly frustrating to deal with. With the wikitext on the left and the scan on the right, the scan is barely readable, and not quite readable enough for me to be confident I'm not just guessing at the spelling. With the wikitext below and the scan above, I can only see a short, fixed-height strip of the scan.
It would be much nicer if I could look at one column at a time in the side-by-side layout; then I could get decent magnification and a nice number of lines ... —SamB (talk) 02:07, 3 February 2014 (UTC)
- If it were my file, I would run the file through OCR using real software, with columns in the layout. This would help immensely. The Haz talk 02:59, 3 February 2014 (UTC)
- Did you try the "zoom in" function? Eg. on the menu, click 'proofreading tools' and then the icon that looks like a magnifying glass with a + symbol. Mukkakukaku (talk) 03:09, 3 February 2014 (UTC)
- Hmm, yes, that does help; I think I tried it before but the scrollbars were screwy and I didn't notice I could pan using the mouse ^_^. I still find this a bit clumsy compared to if there were a way to slice the image into columns explicitly, but it does work okay. —SamB (talk) 21:18, 3 February 2014 (UTC)
- Sorry I misread the issue the first time around. Mukkakukaku's suggestion is much better than mine. ;-) I'll add that you can still grab and drag the image when zoomed in as well. The Haz talk 04:24, 3 February 2014 (UTC)
- You may try over/under edit & proofreading because the natural appearance of the text is larger.— Ineuw talk 04:45, 3 February 2014 (UTC)
Latest tech news from the Wikimedia technical community. Please inform other users about these changes. Not all changes will affect you. Translations are available.
Recent software changes
- The latest version of MediaWiki (1.23wmf12) was added to test wikis and MediaWiki.org on January 30. It will be added to non-Wikipedia wikis on February 4, and all Wikipedia wikis on February 6 (calendar).
- Global AbuseFilter rules are now active on all small wikis. [9] [10]
- The buttons used in pages like log-in, account creation and search are now using the same colors and styles. [11] [12]
- You can now link to diffs using [[Special:Diff/12345]] and similar links. [13]
- There is no longer an option to hide tables of contents on all pages. [14]
- Searching in the File: namespace on Wikimedia Commons is now faster, after a bug was fixed on January 29. [15]
- All Wikimedia wikis now have high-resolution favicons. [16]
VisualEditor news
- You can now see a list of keyboard shortcuts by pressing Ctrl+/ inside VisualEditor. [17]
Future software changes
- Edits and files hidden with the Oversight tool will be moved to the RevisionDelete system. The Oversight tool will then be removed from Wikimedia wikis. [18] [19] [20]
- For languages where not all sister projects exist, you will be able to link to other language projects using double interwikis (:ko:v, :v:ko, etc.). [21]
- It will soon be possible to use the GettingStarted tool on wikis other than the English Wikipedia. You can translate it on Translatewiki.net. [22]
- You will soon be able to include the Special:Contributions page into other pages. [23]
- You will be able to see where a file is used inside MultimediaViewer, the new tool for viewing media files. [24] [25]
- It will soon be possible to send MassMessage messages using the API. [26]
- You will soon see audio statistics on the Special:TimedMediaHandler page. [27]
Tech news prepared by tech ambassadors and posted by MediaWiki message delivery • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.
08:30, 3 February 2014 (UTC)
Best practices for scan attribution
What are best practices for attribution of scans? I uploaded a book scanned by Google and its front page is a request that they get some kind of attribution for this. Does this get transcribed into Wikisource? Thanks. Blue Rasberry (talk) 15:19, 3 February 2014 (UTC)
- No and we usually delete that page from the scan as well. (Some people remove the internal watermarks but I haven't done so before—I don't think I have the software to do so—so I can't really comment on that.) I would normally attribute the source on the File: page on Commons, as with any file, but we are not required to do so anywhere else. - AdamBMorgan (talk) 15:31, 3 February 2014 (UTC)
- As AdamBMorgan noted, Google can request, but not require, that we attribute scans to them. A scan of a page in the public domain is ineligible for copyright in most countries, including the U.S. (There is essentially nothing original and no new information contained in the image, and the "computer code" to produce the image isn't eligible for copyright, so the image itself isn't eligible.) I delete the Google page and the front and back covers (if they are "library" covers). The few books I've downloaded are not simple texts and have had so many mistakes in the OCR, due to strange formatting, that it was easier to clean the file up with ScanTailor and run it through OCR again to verify/fix mistakes and make a DJVU file. It's my way of proofreading, but it's time-consuming, and I don't recommend the entire process for most books. The Google watermark is naturally stripped from the pages of the book.
- With all that said, the fine folks that do all this scanning are hardworking individuals, so I feel it would be wrong not to attribute it to them somehow. When I upload a file to Commons, I fill the |source= section of the {{Book}} template with something such as [http://books.google.com/books/about/The_Voyage_Out.html?id=aRhmAAAAMAAJ Google Books]. Note that their logo is not eligible for copyright either, as it doesn't meet the threshold of originality, though it is still a registered trademark and should be noted as such on its Commons page with {{Trademark}}. The Haz talk 21:25, 3 February 2014 (UTC)
Do you think Wikipedia/Wikisource should have a Kindle (.mobi) output?
(also posted on Wikipedia)
Hi! I am a Kindle user. I use my Kindle in any free time available because of its convenience and feel. It uses e-ink, the look of which is very close to ordinary ink on paper. Sometimes I use it to read a long Wikipedia article, but I have to convert it manually because the clumsy PDF format works poorly on my Kindle, and I believe it also works badly on other portable devices when compared to .mobi. If Wikimedia projects supported this format natively, I think it would be very convenient for Kindle and other e-reader users like me, and would promote wiki content being read in greater depth. What is your opinion?--The Master (talk) 15:46, 3 February 2014 (UTC)
- I would be for it but what about other devices that are not like "Kindle"? Kindle just came out with a superior version of its former self. BTW, I happen to like .PDF files a lot as well as .djvu files. —Maury (talk) 16:11, 3 February 2014 (UTC)
- First, sorry my post is lengthy. I just got out of my ACLS certification class and am a little amped. I agree with the Kindle format idea and had just posted something about this yesterday. I definitely prefer my e-ink Kindle over any computer or tablet for most of my books, so I am a bit biased. With that said, the Kindle is very popular and is rather easy to output pages to. Compared to formatting for PDF, it's a cinch. First, the Kindle can handle HTML directly with a simple renaming of the file extension to .txt (strange, I know). I'm pretty sure mine handles HTML files without changing the extension. More importantly, the mobi/azw file format is in XHTML. It's not some proprietary format that requires licensing or crazy conversion; that's why there are scripts to convert pages to it readily available. With that said, an output option would be helpful, as it would format only the book content, just as the EPUB button does now. Kindle Format 8 is also out now, which supports HTML5 and CSS and is in use on the Fire and on the newer e-ink Kindles.
- Of course, any of these formats can also be opened on any iOS or Android device as well; many of my classmates use the Kindle app despite not actually owning one because they like the app layout, library, store, etc. I'm rambling a bit here, but would like to say that while this fact is not a reason pro-Kindle it no longer goes against it. I read through some of the old pages going against Kindle formatting, but they were written some time ago and many things have changed.
- The most difficult thing I can think of would be how the conversion script handles CSS and whether we separate out mobi and azw files (which have drifted apart in spec). I can put a little time each day into finding a good script and editing/testing it, if it's something we would be allowed to implement. My thought is that with Kindle being a market leader, Wikisource should eventually support it directly. The point is to get these books out there, so why tell a good number of users that they have to convert the files themselves beforehand? (This is exactly what the help pages currently say to do). Just my two cents... plus a few shillings. The Haz talk 20:18, 3 February 2014 (UTC)
- My personal thoughts on Kindle or any other portable e-reader is that I don't care about any of them. I have worked here on en.ws for several years and I read and transcribe the books I want to read right here. It's a reading/transcription hobby and preserves books for future generations rather than just reading for one's own self. Too, the .PDF Reader is free, but I never use it since I have Adobe Acrobat Pro (newest version), though that is for my work here too. It could, however, be used to read .PDF files. There are many options with this program. One can extract all image files if desired, or any number of pages, especially the garbage pages in some .PDF files and all watermarks. I once considered Kindle and others and concluded, "what for" -- just because others are buying the latest gadgets -- the newest tech stuff that really is not needed? So why is it that you purchased a Kindle? Isn't there enough to read here? Besides, it will be outdated soon enough. Why waste money? —Maury (talk) 22:39, 3 February 2014 (UTC)
- To respond to some of your comments/questions: I purchased a Kindle a few years ago specifically to read e-texts without the strain from backlit screens. I also find reading on screens to be annoying and distracting and e-ink didn't have the same effect. You use the phrase "latest gadgets" but the Kindle has been around almost seven years now—the mobi format, almost fourteen.
- I stated, "my personal thoughts", so that all I have typed is what *I* like or dislike.
I certainly do understand the point of "strain" as in eyestrain. I requested eyestrain-friendly background and text colors for our editors here, which was done in .js; I am using it now. Regarding how long Kindle has been around, my statement about the "latest gadgets" refers to "upgrades" for the Kindle (which recently came out) and any other e-reader that comes out in the near future, and we all know technology moves fast. There are constant "upgrades". As you yourself state, Kindle has been around almost seven years. How many "upgrades" is that? There is a constant cost. How much money has Kindle made with each "upgrade", totaled? Upgrading is often constant, perhaps every new "upgrade" for some people. I believe that is a lot of money spent for Kindle's constant growth. This is natural in the world of technology. Personally, during all of that time I have not spent even a penny here on en.wikisource and have covered a lot of book reading.-wmm2
eReaders have become widely used and Kindle held a 55% market share (Publishers Weekly).
- I stated, "Personally" which means I don't care what Kindle's market shares are.-wmm2
(It's also interesting to note that only 6% of people read books on a computer at that time.)
- I've made my statement on what I personally use and I have spent nothing. For every "upgrade" or new e-reader, I therefore have saved that much money that e-readers cost others. I never stated that Kindle and other e-readers are not nice to have. *I* just have never had a need for any of them because of working here since about 2006.-wmm2
In the end, I'm not saying we should expend loads of energy on this, or that anyone disinterested should expend any. What I am suggesting is that we incorporate a small conversion script the same as is done for ePub.
- I am not opposed to that but I "personally" have no need for it.-wmm2
The file would be created on demand the same way it is now. For now I'll probably look into adding this as a gadget or edit my own js file to call a script to convert just to see how well it could work.
- Go for it, I would welcome it.-wmm2
On another note, one of your comments seems to imply that people shouldn't come to WS just to read books.
- No, wrong, don't assume. Your key words of assumption are, "seems to imply".-wmm2
You wrote that what you do "preserves books for the future generations" and I agree wholeheartedly and do the same.
- I made that personal statement because many of my youthful years I wanted certain books and could not get them other than Interlibrary loan and that often did not work as I desired. I take a deep happiness and a pride that WP & WS exist so that old and rare books can be replicated and archived digitally and in that I helped with a tiny portion of creating and saving for born and unborn generations who will not have to experience what I have experienced in my youth before Internet.--wmm2
Nonetheless, why shouldn't current generations be allowed to enjoy the texts found on here in any way they see fit, without a requirement to transcribe or edit?
- Hey, I am all for it. I never stated otherwise. My statements were my "personal thoughts" and in no way am I opposed to what you have stated. If I were rest assured I would say so.-wmm2
Someone even wrote on the Help page
- Not me. -wmm2
that "[r]eading is the main point of Wikisource."
- Well, I will state that all technology is constantly being upgraded and refined and that includes statements made here, on wikipedia, and elsewhere. en.ws is constantly being refined and I do not refer just to reading or editing books here.-wmm2
To this end, offering more on-the-fly formats could be great for capturing new readers and contributors in the future.
- I agree with that. Can you work it out? If so then I ask that you please do so. It would be a part of upgrading and refining.-wmm2
Anyway, I understand all too well the, ahem, addiction, of transcribing texts as you do and I would like to thank you for all the work you've done here.
- Addiction? What is there in life that masses of people love that is not an "addiction"? It is not so much an addiction for me as it is an appreciation of being allowed to work here to make the world a little bit better. I further will state an assumption, and that is that you drink coffee or tea, and probably for the caffeine. Do you not consider that to be an "addiction"? I do not like coffee, nor smoking, nor drinking any form of alcohol. Been there, done that decades ago, didn't like it, am not addicted to any of it. Are not any of those an, "ahem", addiction for you? I don't think that the love of things is necessarily an "addiction". --wmm2
It definitely doesn't go unnoticed. Without people that enjoy doing this work, Wikisource simply wouldn't exist.
- I cannot speak for others here, but I will state for myself that working here, creating and refining and changing history itself, is doing something worthy for myself and for others. I love books and always have loved the knowledge within them. They teach, they explain, they explore, which I love to do both in the real and the virtual world. I am retired and well off, so I have the time to do what I like to do, and I do. WP and WS are just a part of what I love to do with my life. It is peaceful here, unlike what it was in combat in Vietnam. Construction is the opposite of destruction, and I like that.
I would just like to expand the accessibility of the site a little.
- Understood, and I would like for you to be able to do that as well, but not to think of any of it as an "addiction" in thought nor reality. Like I stated, go for it and keep on going for it. What do you have to lose in trying, and in continuing to try? I think it would make the world a better place. I really do. If I knew how, I would join in with you. So I would like to ask: now what are you going to do? Are you going to quit your ideas and dreams and leave with your head hanging low, or are you going to keep on until either you or someone else succeeds with your present thoughts? More, I challenge you to do as you dream, and I say this with the hope that you will and that you succeed. Kindest regards, (wmm2) —Maury (talk) 06:45, 4 February 2014 (UTC)
The Haz talk 00:05, 4 February 2014 (UTC)
- The matter of .mobi files has been discussed here a few times (WS:S search). @Jeepday, @JeepdaySock: has commented on this before, and maybe they can bring us up to speed on their playtime. I wouldn't have thought the issue was insurmountable; as a file format for the book tool, presumably something can be done, though I think that tool was also an area of change. Note that Amazon has developed an .epub to .mobi converter called KindleGen (it supports the IDPF 1.0 and IDPF 2.0 epub formats, according to the company).
- Should have been clearer. We do have an .epub version available for all our works, and that is manipulable. — billinghurst sDrewth 15:05, 4 February 2014 (UTC)
- Sounds good, I'll have to look into that later. I had seen many of the prior posts, but it had seemed like a dead topic to me. And I wasn't the person who posted it here originally, just someone interested in carrying it out eventually. The book tool isn't accessible at the moment so I can't look at it right now, though I know that I wasn't able to get either the "book creator" on the toolbar or the one you turn on via Preferences to work correctly. I think only the one via Preferences (the one labeled for Wikisource) was able to do ePub for the books I tried, and neither could make a PDF or ODT (the output only had the header template from each page and nothing else). I'm also surprised to find that an ePub exists for each book. That uses a lot of server power and space. When does the ePub get made? And if someone fixes something in a book, does the ePub automatically update itself? I would have assumed they were made on the fly. Thanks for the information, The Haz talk 20:45, 4 February 2014 (UTC)
Deletion policy for Author pages
I just posted this at Wikisource talk:Deletion policy#Author pages:
What happened to Wikisource:Scriptorium/Archives/2008-02#Author-PD-none? In particular, we still have:
- Copyright violation: Content which is a clear and proven copyright violation, or content previously deleted as a copyright violation, or author pages for authors whose works are all copyrighted.
(red mine) here, though it looks like it was agreed to relax that to something more like:
- Copyright violation: Content which is a clear and proven violation of Wikisource copyright policy, or content previously deleted as a violation of Wikisource copyright policy, or author pages for living authors whose works are all violations of Wikisource copyright policy.
(green mine, and possibly factored out into a new criterion).
Did you guys forget to actually make the change, or what? —SamB (talk) 05:54, 4 February 2014 (UTC)
But you should respond there, not here! —SamB (talk) 06:14, 4 February 2014 (UTC)
No eBook available
Are we beginning to run out of source materials? Google constantly shows No eBook available. —Maury (talk) 14:34, 5 February 2014 (UTC)
- That pretty much sums it up for those of us in Australia. It seems to me that Google Book no longer makes stuff available to us for download ever. Hesperian 16:27, 5 February 2014 (UTC)
- Right. I think now that any worthy texts are being sold with a new cover and an updated date, and perhaps with a new introduction by someone to claim copyright (but only on that introduction). There are some really good-looking books (cover art) being sold now. Perhaps there are some few left-over books, but the ones of some quality are edited by various persons and sold on Amazon and on Google and elsewhere. To keep some books as Public Domain, we get the scraps that have the obvious mistakes. Even HathiTrust carries Google's watermarks, which isn't new, but what is new, I firmly believe, is that I now find a lot of "limited view" on those books. Have an exceptional Day, —Maury (talk) 17:15, 5 February 2014 (UTC)
- It shows "To continue with your download, please type the characters you see below:" for me, and when I type them the download starts. Solomon7968 (talk) 17:46, 5 February 2014 (UTC)
- P.S. @Maury, Hesperian See this discussion regarding downloading PDF from Google Books + watermarks and warning page removal + Internet Archive upload for Djvu conversion. Solomon7968 (talk) 04:10, 8 February 2014 (UTC)
"PDF Scans derived from Google Books contains a warning which needs to be stripped off before adding the text to IA for facilitating proofreading for Wikisource. These are normally done by the user/bot "tpb" (not affiliated to Internet Archive): we dream of a way to suggest tpb books we're interested in; we can start accumulating Google Books URLs here and then maybe tpb at some point will fetch them." @Solomon7968. I am not positive but I doubt that I ever leave that page in any upload to here. I simply dislike it. Too, I am working with .djvu scans now. I do not like leaving watermarks but there are times when all pages have them. How are those removed from a .djvu file? Thank you for the heads up though. Kind regards, —Maury (talk) 17:08, 8 February 2014 (UTC)
- One way: the watermark is a separate image from the page. If you are using the Google Books PDF scan with Acrobat Pro (or Nitro PDF Reader) you can use the "export all images" feature. Unlike the "save as image" feature, which saves each page as an image, the "export all images" feature extracts each separate image from the file. In our case, that will extract two images for each page: the scan and the faux-watermark. If you extract into its own folder and sort by file size, you'll see that the Google logo images are small (2-25 KB) while the scans are much larger. This makes it easy to select the Google logo files and delete them. I've been told SomePDF Images Extract supposedly does the same, but I haven't used it. It probably tries to install some junk as well as the main program, based on the look of the website. The Haz talk 22:57, 9 February 2014 (UTC)
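The sort-by-size cleanup described above is also easy to script. Here is a minimal Python sketch of just that step, assuming the images have already been exported to a folder; the function name and the 25 KB cutoff (taken from the 2-25 KB sizes quoted above) are my own and should be checked against your own extraction before deleting anything.

```python
from pathlib import Path

# Cutoff based on the sizes quoted above: the Google logo images
# run roughly 2-25 KB, while real page scans are much larger.
LOGO_MAX_BYTES = 25 * 1024

def remove_logo_images(folder):
    """Delete extracted images small enough to be the Google watermark."""
    removed = []
    for img in Path(folder).iterdir():
        if img.is_file() and img.stat().st_size <= LOGO_MAX_BYTES:
            img.unlink()
            removed.append(img.name)
    return sorted(removed)
```

Run it on a copy of the exported folder first, and eyeball the returned list before trusting the threshold.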
- I forgot to mention that you can delete the first pages from the file, getting rid of the warning. Most PDF programs can handle something like this. However, you can always use PDF Helper to quickly pull each page as a separate PDF, delete the files you don't want, and use the software to put them back together. It's low-tech, but free and I've been using it for years when I don't use Acrobat. (It has this strange side effect of removing any password protection from most PDFs that get processed.) The Haz talk 23:02, 9 February 2014 (UTC)
- Thanks Haz, I was totally clueless about how to remove the embedded watermarks from PDFs. Now I guess the only thing to worry about is finding a suitable replacement for user/bot "tpb" (Re: "The work tpb has started seems to have ended years ago, but I'm not sure." Lugusto 19:03, 7 February 2014 (UTC)) to mass-upload scanned PDF books from Google Books. Any ideas? Solomon7968 (talk) 05:19, 10 February 2014 (UTC)
- Important: Relevant discussion with developers for Google Summer of Code 2014. See mw:Mentorship programs/Possible projects#Google Books > Internet Archive > Commons upload cycle.
George Orwell III and I use Adobe Acrobat Pro. GO3 has version 10 and I upgraded to version 11. I think I can do just about anything with version 11 regarding .PDF files, but .PDF files are not as desirable here because .DJVU offers better text retention. However, .DJVU is not good for images. I upload .DJVU files here to en.ws now, and the text I have encountered is excellent. So I upload cleaned .JPG files that I extract (using Acrobat Pro v.11) from .PDF files to WikiCommons and insert them into .djvu files, as I did today. I have a .DJVU book here on en.ws now; moments ago I uploaded .JPG images and inserted them into that book. I am only recently learning how to handle .DJVU files, including how to make .PDF files from them, which I don't do. What I do not know and need to learn is how to extract any page from .DJVU files. [DJVU = kidding here => "Disk-Jockey-Virtual-U2"] I do recall that Ineuw stated something about using -d, which I guess means delete, but I only know commands like that from a U.Va. "UNIX" system from decades ago. —Maury (talk) 05:51, 10 February 2014 (UTC)
- I'm sorry -- I wasn't saying you should upload the PDF, or even the image files. Since Google Books has PDFs and not DJVUs, I just use Adobe to export those images to compile DJVU files, and upload the DJVU. I've never taken apart a DJVU the way you are asking. If it's Google Books, it might take less time to upload a new DJVU file (made from the images in the PDF) over the old one, but I haven't done that so I'm not sure. Have you already skimmed the information and links at Help:DjVu_files#Manipulating? The Haz talk 06:28, 10 February 2014 (UTC)
- Oh, and as for deleting pages with the "-d" command, you can do that in Windows as well. Similar to Unix, the command would be djvm -d filename.djvu pagenumber. You can use the command line or create a batch file, because djvm doesn't have a GUI as far as I know. Or, if you have Linux, you can do the same from the terminal. The Haz talk 06:34, 10 February 2014 (UTC)
- No big deal, my friend, I was chatting while attempting to learn from others here, and I have learned. I used to upload .PDF files after I had cleaned out all watermarks and the "warning" page, along with some other pages with garbage. I would extract the images, clean any needed scribblings by someone who liked to draw on book pages, then place the cleaned images back into the .PDF file and send it to Internet Archive and let it derive many formats, as have others here. I now download .DJVU files for the text, plus a fast move to WikiCommons, and then to here at en.ws. But how would I use -d in Windows XP? I see no such option. Do you mean in DOS? If so, how do you get DOS to work with a .DJVU file? What 3rd-party program are you using with Windows to be able to use a Unix command of -d with a .DJVU file? No, I do not have Linux. I use regular old Windows XP and have never seen djvm -d filename.djvu pagenumber used in a program with this. Have a wide-awake wonderful day; I am now sleepy after working half a day and all night. Thank you, perhaps I will understand what you mean when I am awake. —Maury (talk) 10:43, 10 February 2014 (UTC)
- In WinXP you can either click Start, Run, and type "cmd" to get to the faux-DOS prompt (DOS is no longer in Windows) or Start > Program Files > Accessories > (System) > Command Prompt. That's the same as typing "cmd" at the Run prompt, though. See Microsoft's Command Prompt FAQ if you don't know or have forgotten some of the commands to get around, as they differ a little from Unix. However, djvm is part of DjVuLibre, so you'll have to have that if you don't already. If you're going to do this to a bunch of files with the same pages deleted, you can make a batch file that looks for any file with the DJVU extension as noted above. Leave the batch file in its own directory and put whatever DJVU files need to be processed there. Double-click the file and it will run the commands quickly so you don't need to type anything out. It's also helpful because you can write batch files (.bat) right in Notepad. See Writing Your Own Batch File and Creating Bat file to execute a command on all files in folder. The Haz talk 15:48, 10 February 2014 (UTC)
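The batch-file idea above can also be sketched in Python, which behaves the same on Windows or Linux. This sketch only builds the djvm command lines for every .djvu file in a folder (the function name is my own invention); you would then pass each one to subprocess.run(cmd, check=True) once DjVuLibre's djvm is installed.

```python
from pathlib import Path

def djvm_delete_commands(folder, page_number):
    """Build a `djvm -d file.djvu page` command for each .djvu in `folder`.

    djvm removes the page in place, so run these on copies first, e.g.
    with subprocess.run(cmd, check=True) after installing DjVuLibre.
    """
    return [["djvm", "-d", str(djvu), str(page_number)]
            for djvu in sorted(Path(folder).glob("*.djvu"))]
```

Keeping the command construction separate from execution makes it easy to print the list first and sanity-check it before touching any files.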
- OH! Okay, I remember this now! I just never use it any longer. It is like DOS (disk operating system) and I know how, if I remember correctly, to make .BAT files. I used to have a program called DCOLOR. It would make .BAT files red in color and command (COM) files green, and other files such as text were colored white which made working in that environment even easier. There was also an editor in there. There was also a great search command. I haven't used that "faux" fake/false area for decades after win3 came out. I did not know anyone used that area anymore. Younger guys learn computer codes &c in school but when I was in high school Physics class we had to learn how to use a slide rule before we could actually work since no digital calculator for sale existed. At least we did not have to use an abacus! Thank you very much!! Kindest regards, —Maury (talk) 16:05, 10 February 2014 (UTC)
- No big deal, my friend, I was chatting while attempting to learn from others here and I have learned. I used to upload .PDF files after I have cleaned out all watermarks and the "warning" page along with some other pages with garbage. I would extract the images and clean any need scribblings by someone who like to draw on book pages then place the cleaned images back in to the .PDF file then send it to Internet Archives and let it derive many formats as have others here. I now download .DJVU files for the text plus a fast move to WikiCommons, and then to here at en.ws But how would I use -d in Windows XP? I see no such option. Do you mean in DOS? If so how do you get DOS to work with a .DJVU file? What 3rd party program are you using with Windows to be able to use a unix command of -d with a .DJVU command? No, I do not have Linux. I use regular old Windows XP and have never seen
What is this?
Anybody have an idea about what this is: Index:Quiz11111.pdf? It's in Greek, I think, but as I was preparing to tag it for transwiki, I took a closer look and thought that it doesn't really look like something that is potentially out of copyright, or really a candidate for inclusion on any wikisource. (The name of the file doesn't help either.) Any ideas? Mukkakukaku (talk) 02:34, 6 February 2014 (UTC)
- It's a high school physics test on motion. The uploader is claiming it as own work. However, as it's in Greek, I have deleted the Index from here. Beeswaxcandle (talk) 02:58, 6 February 2014 (UTC)
Same person? Ali ibn Husayn
Just noticed these two authors, and they could be the same person.
On Wikipedia: Ali ibn Husayn Zayn al-Abidin
Perhaps someone has better knowledge of this? - Danrok (talk) 17:26, 7 February 2014 (UTC)
- That Wikipedia article does have them as two names for the same person. Ali ibn Husayn = name. Zayn al-Abidin = Honorific title used after name. The Haz talk 06:43, 8 February 2014 (UTC)
- I should mention that that WP article also has two dates of death and two dates of birth. The ones in the lead are different from the ones in the Infobox. The Haz talk 06:44, 8 February 2014 (UTC)
- I would say merge to the longer name, and make the shorter a redirect. You may need to make any fixes at WD. — billinghurst sDrewth 11:47, 8 February 2014 (UTC)
Two somewhat large moves to Translation: space
Could someone with a bot flag move Mishnah & Shulchan Aruch and their subpages to the Translation: namespace? There are too many subpages to be moved in the usual way. If possible, subpages that are still linked to after the move should be kept as redirects until they can be cleaned up. --Eliyak T·C 06:29, 9 February 2014 (UTC)
- I tried one, see Translation:Mishnah/Introduction. Two points to be considered:
- please note the Translation:Mishnah in the header part; is that OK? I also saw a {{Translation header}} around; should that be updated (maybe in a 2nd round)?
- what about redirects? Is it OK not to leave redirects for sub-pages, in case they are automatically handled by relative links? I remember some discussions in the past but I cannot recall the outcome; it was about cross-namespace redirects not being allowed vs. what to do with references from outside this wiki--Mpaa (talk) 21:03, 9 February 2014 (UTC)
- Very good point. By going through various Google searches for incoming links to each of the 6 main subpages and their subpages (e.g. [28]), it seems there are somewhere between 200-500 incoming links to subpages. I was originally going to suggest using {{dated soft redirect}}, but since there are a significant number of incoming links, I would like to suggest a different approach - use {{soft redirect}}, but have the redirects "self-destruct" after a couple years by using {{timed}} in conjunction with {{sdelete}}. As far as the title issue goes, I think that could be corrected in a 2nd round as you suggest. --Eliyak T·C 00:10, 10 February 2014 (UTC)
- Update: my above naive idea of a time-triggered template will not work as is, because the page will not update without being purged first. --Eliyak T·C 03:50, 10 February 2014 (UTC)
- Just a thought: Make a category specifically for use by a "redirect until" template (which probably also has to be made). You can throw them all in one category. Then use a bot once or twice a year that runs down the category and checks the date on the template of each page. If the current date is past the one on the template, the page gets nuked. The Haz talk 06:41, 10 February 2014 (UTC)
- I will wait for some more comments then I'll proceed, creating something similar to this: Help:Redirects#Redirect_sorting, unless some other instructions will be posted.--Mpaa (talk) 20:24, 10 February 2014 (UTC)
- Err... what happened to the whole {{Translation redirect}} approach? Too complicated? -- George Orwell III (talk) 00:49, 12 February 2014 (UTC)
- I was simply unaware of ... :-) --Mpaa (talk) 07:58, 12 February 2014 (UTC)
- Mishnah is moved. Clean-up is needed as absolute links mess up transclusions.--Mpaa (talk) 11:22, 16 February 2014 (UTC)
- Clean-up of absolute links is basically done.--Mpaa (talk) 09:51, 18 February 2014 (UTC)
- Now done both.--Mpaa (talk) 22:07, 22 February 2014 (UTC)
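The twice-yearly expiry sweep suggested earlier in this thread (a bot walking a tracking category and deleting soft redirects whose "redirect until" date has passed) can be sketched as follows. This is only a sketch: the page titles, dates, and the in-memory list are hypothetical stand-ins for what a real pywikibot run would read from the tracking category and the template parameters.

```python
from datetime import date

# Hypothetical inventory of soft redirects left after the move:
# (page title, "redirect until" date taken from the template).
soft_redirects = [
    ("Mishnah/Introduction", date(2016, 2, 10)),
    ("Shulchan Aruch/Orach Chayim", date(2017, 6, 1)),
]

def expired(pages, today):
    """Return the titles whose 'redirect until' date has passed,
    i.e. the pages a periodic bot run would nominate for deletion."""
    return [title for title, until in pages if until < today]

# A run in early 2017 would flag only the first redirect:
print(expired(soft_redirects, date(2017, 1, 1)))  # -> ['Mishnah/Introduction']
```

Because the bot, not the wiki page itself, compares the dates, this sidesteps the purge problem noted above with a purely template-driven {{timed}}/{{sdelete}} approach.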
Latest tech news from the Wikimedia technical community. Please inform other users about these changes. Not all changes will affect you. Translations are available.
Recent software changes
- The latest version of MediaWiki (1.23wmf13) was added to test wikis and MediaWiki.org on February 6. It will be added to non-Wikipedia wikis on February 11, and all Wikipedia wikis on February 13 (calendar).
- The Vector search box was changed to fix old display and accessibility issues; for example, you can now use full-text search even if you have disabled JavaScript. Please report any problems you see. The option to disable the "simplified search bar" on Vector will also be removed. [29] [30] [31] [32] [33]
- You are now notified when someone adds a link to your user page on wikis where it didn't work before (wikis with dates in the year-month-day order, including Hungarian, Japanese, Korean and some variants of Chinese). [34]
VisualEditor news
- You can now set media items' alt text and position, and directly set their size, in the media tool. [35] [36]
- The gallery tool was improved and several issues were fixed. [37] [38] [39]
Problems
- On February 3, all wikis were broken for about an hour due to a traffic balancing issue. [40]
- On February 6, some wikis were broken for about half an hour in total due to a problem with the Math extension.
Future software changes
- Some methods from Scribunto's mw.message library will be removed after February 18. If you use them in your templates or modules, please check to make sure that things will not break. [41] [42]
- You will soon be able to use GettingStarted on 23 new Wikipedias. It helps new users by listing possible tasks and giving help. The new version was also added to the English Wikipedia on February 7th. [43]
- You will soon see results from other wikis when you use the new search tool (CirrusSearch). [44] [45]
- The WikiLove tool was redesigned and should also load faster. [46]
- Edits made with WikiLove or after a GettingStarted suggestion will be tagged. [47] [48]
- It will soon no longer be possible to hide section editing links in your preferences. [49]
- You will soon be able to use the revision deletion feature via the API. [50]
- You will soon be able to choose mobile view on non-mobile devices using a Beta Feature option. [51] [52]
- If you have questions about Universal Language Selector, you can join an IRC meeting on February 12 at 17:00 UTC, in the #wikimedia-office channel on Freenode. [53]
- Developers are preparing for Google Summer of Code 2014. You can propose ideas. [54]
- bugzilla.wikimedia.org will be updated this week. You won't be able to access it from 22:00 UTC on February 12 until 01:00 UTC on February 13 at the latest. [55]
- The <poem> tag will be renamed to <lines>. The old tag will still work. [56]
Tech news prepared by tech ambassadors and posted by MediaWiki message delivery • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.
09:30, 10 February 2014 (UTC)
The History of Wikisource including a Timeline
I often wonder about the history of wikisource. I believe it should be recorded, perhaps as The History of Wikisource including a Timeline or something similar about the history of wikisource, and kept updated. We are making history here just as wikipedia and other areas are. If there is already something similar to this I would like to see it. I have been here about six years, but wikipedia is well known whereas I do not believe wikisource is; "advertising" is part of that difference. I know of some recent changes and of people and aliases here, but what about before 6 years ago when I came here from wikipedia after many articles and aliases? I think all of the history of wikisource should be recorded in some way. I am curious about it now. In the future, with many changes taking place with time and technology and formats, et cetera, others will be more interested than I am now about the earliest portion of the history of wikisource. A few years ago, as one of my sons was teaching his daughter tech, she was surprised, as was he, when she told her dad that she thought the "Internet has 'always existed'". She is now age 10 and advanced with much technology, because my son has been advanced with technology and the Internet since before the days of browsers and web-pages, and then the Internet later for the public. I remember hearing statements that such a system existed starting with only the military, then for doctors seeking all sorts of cures for cancer &c, and eventually where I am now. I started long ago with Computer Programing Institute and ended up attending U.Va. and joining the fledgling technology committee for faculty, staff and later students only. Time passes quickly and so does technology, so with all of this I think the history of wikisource should be recorded. I hope that it already has been, as well as still is. William "Maury" Morris II —Maury (talk) 19:36, 11 February 2014 (UTC)
- All good points. As for a history of Wikisource, it might be better to amend and/or cleanup the Wikisource Wikipedia page. In my mind, it seems appropriate to keep the history there. The Haz talk 22:05, 11 February 2014 (UTC)
{{subpage-header}}
Namespace is not handled and when used in Translation namespace, it refers to Main ns. See e.g. Translation:Mishnah/Seder_Zeraim/Tractate_Peah/Chapter_3/4. Can some template-expert look into it and possibly add namespace support? Thanks--Mpaa (talk) 15:04, 16 February 2014 (UTC)
- I added that functionality, and tidied up while I was in there. --Eliyak T·C 07:50, 18 February 2014 (UTC)
Latest tech news from the Wikimedia technical community. Please inform other users about these changes. Not all changes will affect you. Translations are available.
Recent software changes
- The latest version of MediaWiki (1.23wmf14) was added to test wikis and MediaWiki.org on February 13. It will be added to non-Wikipedia wikis on February 18, and all Wikipedia wikis on February 20 (calendar).
- You can now use the list of active users again. [57] [58]
- The new search tool (CirrusSearch) now gives more importance to content namespaces if you search in several namespaces. [59]
- You can now directly link to files viewed with MultimediaViewer, the new tool for viewing media files. [60]
- You can read the summary of the Wikimedia technical report for January 2014. [61]
Problems
- On February 9, Wikimedia Labs was broken for about 2 hours due to an XFS file system problem. [62]
- On February 11, there were problems with VisualEditor for about 20 minutes due to a server logging issue. [63]
- On the same day, for about 20 minutes there were issues with page loading due to database problems.
- There were issues with page loading between 21:00 UTC on February 13 and 11:00 UTC on February 14 for users in Europe. It was due to a cache server problem.
- On February 14, all sites were broken for about 15 minutes for users in Southeast Asia, Oceania and the western part of North America. It was due to problems with cache servers.
VisualEditor news
- The link tool now tells you when you're linking to a disambiguation or redirect page. [64]
- You can now change image display (like thumbnail, frame and frameless) with VisualEditor. [65]
- Wikitext warnings will now hide when you remove wikitext from paragraphs you are editing. [66]
- You will soon be able to create and edit redirect pages with VisualEditor. [67] [68] [69]
08:38, 17 February 2014 (UTC)
Universal Language Selector will be enabled by default again on this wiki by 21 February 2014
On January 21, 2014, the MediaWiki extension Universal Language Selector (ULS) was disabled on this wiki. A new preference was added for logged-in users to turn on ULS. This was done to prevent slow loading of pages due to ULS webfonts, a behaviour that had been observed by the Wikimedia Technical Operations team on some wikis.
We are now ready to enable ULS again. The temporary preference to enable ULS will be removed. A new checkbox has been added to the Language Panel to enable/disable font delivery. This will be unchecked by default for this wiki, but can be selected at any time by the users to enable webfonts. This is an interim solution while we improve the feature of webfonts delivery.
You can read the announcement and the development plan for more information. Apologies for writing this message only in English. Thank you. Runa 07:30, 19 February 2014 (UTC)
Discussion of preliminary proposals elsewhere
At w:User:John Carter/Opinions I have recently put together a page of some proposals which I think may be basically achievable which might help some of the various WF entities, including this one. Anyone is free to comment there, add other proposals, etc. John Carter (talk) 15:43, 21 February 2014 (UTC)
Amendment to the Terms of Use
Hello all,
Please join a discussion about a proposed amendment to the Wikimedia Terms of Use regarding undisclosed paid editing and we encourage you to voice your thoughts there. Please translate this statement if you can, and we welcome you to translate the proposed amendment and introduction. Please see the discussion on Meta Wiki for more information. Thank you! Slaporte (WMF) 22:00, 21 February 2014 (UTC)
"Broken Arrow"
Wikimedia Foundation Error {{pbr}} Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes.{{pbr}} If you report this error to the Wikimedia System Administrators, please include the details below. {{pbr}} Request: GET http://en.wikisource.org/wiki/Special:Watchlist, from 10.64.0.104 via cp1067 cp1067 ([10.64.0.104]:3128), Varnish XID 814708431 Forwarded for: 24.162.139.146, 208.80.154.133, 10.64.0.104 Error: 503, Service Unavailable at Sun, 23 Feb 2014 19:59:00 GMT
This is happening constantly. I have edited throughout it by edit, copy, wait, try, try, paste, save. This too is a hurried post. —Maury (talk) 20:04, 23 February 2014 (UTC)
- Do you have an exceptionally hefty Watchlist? Sometimes really big files just get butt ugly with editing. Are you editing it through removing items, or working with the raw list? — billinghurst sDrewth 14:23, 24 February 2014 (UTC)
- I suppose it is large by now, Billinghurst. I have removed some pages from time to time but not a lot. Would there be any harm done if I remove all of the watchlist items? There are 2 ways I know of to edit the watchlist and I suspect it doesn't matter which way is chosen. I suppose I should leave the people listed there? Very Respectfully, —Maury (talk) 15:02, 24 February 2014 (UTC)
- Actually, bill, I received the same server error yesterday (not just on Wikisource), and I have a relatively small watchlist.~ DanielTom (talk) 15:18, 24 February 2014 (UTC)
- Hefty watchlist Kaput! I deleted almost everything. —Maury (talk) 15:22, 24 February 2014 (UTC)
Latest tech news from the Wikimedia technical community. Please inform other users about these changes. Not all changes will affect you. Translations are available.
Recent software changes
- The latest version of MediaWiki (1.23wmf15) was added to test wikis and MediaWiki.org on February 20. It will be added to non-Wikipedia wikis on February 25, and all Wikipedia wikis on February 27 (calendar).
- The new search tool (CirrusSearch) was added to the Italian Wikiquote and all Wikiversity projects. Users can now enable it in their Beta options. [70] [71]
- The Universal Language Selector was enabled on all Wikimedia wikis again. You can enable web fonts in your ULS options (see how).
VisualEditor news
- You will soon be able to add and edit __STATICREDIRECT__, __[NO]INDEX__ and __[NO]NEWEDITSECTION__ in the page settings menu. [72] [73] [74]
- You will soon be able to use the Ctrl+Alt+S or ⌘+Opt+S shortcuts to open the save window in VisualEditor. [75] [76]
- You will soon be able to preview your edit summary when checking your changes in the save window. [77] [78]
Future software changes
- You will soon be able to use the <categorytree> tag again. [79]
- You will soon be able to post messages with the MassMessage tool in all talk namespaces. [80] [81]
- Notifications will soon work in all namespaces. [82] [83] [84]
Tech news prepared by tech ambassadors and posted by MediaWiki message delivery • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.
10:18, 24 February 2014 (UTC)
I'd appreciate some help on getting the first part of this finished...
{{Statute table/chapter}} is now documented, with the header and footer being generated automatically.
Thanks. ShakespeareFan00 (talk) 23:05, 24 February 2014 (UTC)
Non English Work - Index:ΠΟΛ 1011 2011.pdf
Looks like Greek? ShakespeareFan00 (talk) 17:09, 26 February 2014 (UTC)
Using the Modern skin - a new challenge to the coders
I switched from Vector to the Modern (blue) skin to test the editor and discovered that the compactness of the layout eliminated the need to constantly scroll vertically to access various options and controls during editing. However (there is always a however), the skin has a graphic display anomaly seen here, and this anomaly exists in all the browsers and OS's I tested and compared to the Vector skin. The image description lists the browser and OS combinations, with the exception of Internet Explorer. Can this be corrected? — Ineuw talk 08:23, 28 February 2014 (UTC)
- Assuming the problem was the bottom rather than middle alignment of certain line segments generated by the custom rule template scheme, a tweak to the File: string in the custom rule segment template forcing alignment back to middle over the skin's default(s) should have fixed that for you. Report back either way. -- George Orwell III (talk) 03:41, 1 March 2014 (UTC)
- It works. It's a perfect GO3 solution as usual. Thanks.— Ineuw talk 18:50, 1 March 2014 (UTC)
A note to those who are having problems with HotCat
The working solution is to:
- Empty all code from the .js pages
- Reset Preferences to their default and Save
- Activate (check) HotCat, Save and test.
- Repaste code, Save and check HotCat.
- Reselect Preferences Save & check HotCat.
- Reselect needed Gadgets but keep checking HotCat function after each selection.— Ineuw talk 09:51, 28 February 2014 (UTC)
Pipes replaced with {{!}}
When I built a wikitable, the pipes were all replaced with pipe templates (diff). Is this the intended behavior? Heyzeuss (talk) 06:36, 1 March 2014 (UTC)
- Yes. If you want to pass a table as an argument to a template, you have to "protect" the pipes so that the template doesn't try to interpret them as argument separators. That's what {{!}} does. On index pages, whatever you put into the text fields ends up being used as arguments to a template. But there is no expectation that you would know that, so the index page PHP parses your input and "protects" your pipes for you. Hesperian 07:24, 1 March 2014 (UTC)
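The pipe-protection step described above amounts to a simple text substitution. A minimal sketch in Python (the function name is made up for illustration; the real work is done by the Index page's PHP handling, and {{!}} expands back to a literal "|" when the template renders):

```python
def protect_pipes(field_text):
    """Escape raw pipes in a field so that, when the field is later
    passed as a template argument, the pipes are not mistaken for
    argument separators."""
    return field_text.replace("|", "{{!}}")

# A wikitable typed into an Index text field:
row = '{| class="wikitable"\n|-\n| cell one || cell two\n|}'
print(protect_pipes(row))
```

Every "|" in the table markup comes out as {{!}}, which is exactly the substitution visible in the diff linked above.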