User talk:Nageh


Classical block codes are usually implemented using hard-decision algorithms

Hi Nageh, I think the reference you added clarifies things. I was thinking about block codes in general, but it's true that your statement was about classical block codes, for which I agree that hard-decision was common. My mistake. Itusg15q4user (talk) 15:37, 9 October 2009 (UTC)[reply]

Hi - I've gone back and copy-edited the article to change all "analog" to "analogues", including links that I had not originally edited. However, I didn't change Template:Modulation techniques or Template:Audio broadcasting, which are also used in the article and use the American spelling. The WikiCleaner just changes the redirects back to the current page names, which in this case happens to be the American spelling version at the moment. Since both spelling versions were previously used in the article, I really couldn't tell that the British spelling would be the dominant version, since my perspective is from the American side. I've also gone back and changed the spelt-out acronyms to have capital letters, including links that were not originally edited by me, so that they match capitalizations throughout the article. Also, I defined a few acronyms, so that non-technical readers will know what they mean.

Could use your help with a few of the links needing disambiguation (e.g., carrier-to-noise ... threshold [disambiguation needed] and quadrature [disambiguation needed] ... -mixed), thanks. --Funandtrvl (talk) 21:31, 18 November 2009 (UTC)[reply]

Thanks again for fixing the disambigs! --Funandtrvl (talk) 20:15, 19 November 2009 (UTC)[reply]

Concatenated error correction codes

The page is still being worked on. There may be minor errors, but on the whole the text you saw was an attempt to make the article more accurate and readable.

My technical background for this is adequate

A lot of the articles on DVB / ATSC and Compact Disc / DVD error correction are in paper form, so often you have to go via what you remember. DVB ECC and ATSC ECC are not the same, so I had to hedge my text a little. DVB-T2 is a totally different can of worms vs DVB-T, so even differentiating between these formats' ECC is a minefield.

I am trying to fix and improve this at the moment.

Minor misreadings you made

  • similar to NICAM? Why do you refer to an audio compression standard? Interleaving is a standard technique in error-correction coding (a minimal sketch follows this list), and your reference is totally misplaced! The text applies to the standardized CCSDS randomizer ... that is practically a twin to the NICAM randomizer. NICAM and CCSDS interleavers do have a lot in common too, but only the short ones...
  • For example, Voyager initially used convolutional codes concatenated with Golay codes for the planetary missions; the Golay code allowed the images to be sent 3x faster than RSV, but after the primary mission was over the RSV code was made mandatory -- however, finding the proper papers and articles to cite for this is hard
  • And DVB-T does indeed use code concatenation with RS codes. News to me, as I was not 100% certain; however, DVB-T2 does not and DVB-S2 may not either. DVB vs DVB2 are different creatures when you ignore the standardized video resolution layers.
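
A minimal sketch of the block-interleaving idea referred to above (toy row/column sizes, not the CCSDS or NICAM parameters):

def interleave(symbols, rows, cols):
    # write the symbols into a rows x cols array row by row, read them out column by column
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # reading a cols x rows array the same way inverts the permutation
    return interleave(symbols, cols, rows)

# after de-interleaving, a burst of consecutive channel errors is spread over many codewords
data = list(range(12))
assert deinterleave(interleave(data, 3, 4), 3, 4) == data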

PS: OFDM is a proposed CCSDS transmission format!

Eyreland (talk) 08:40, 6 February 2010 (UTC)[reply]

Voyager I upgrades, Voyager Program Papers

The Voyager 2 probe additionally supported an implementation of a Reed-Solomon code: the concatenated Reed-Solomon-Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune.

Yes, true (a lot of what you have put there is news to me as I was never able to get all these details) -- but Voyager I must have had an upgrade of its ECC system once the cameras were turned off. Where the reference to this activity is in the Voyager Mission Reports eludes me. I don't have any date to go by to prove when the Voyager I ECC upgrade happened. However, the Voyager I ECC upgrade must have happened ... it is the tyranny of the link margin, so to speak. Also, identical (but separate) coding for uplink / downlink is cheaper too...

Can you restore the section on CC-ECC with less math

Can you restore the section on CC-ECC with the less mathematical explanation?

That section had nothing to do with misinformation at all; it was at best a guide for those less mathematically inclined.

You must remember that most of the US population (and UK here too etc...) is not that mathematically inclined and would not be helped by the pure math section of Concatenated error correction codes.

I speak from experience, as I am involved with

Getting information on how to decode Voyager Program packets or signals is very difficult, as it is a paper-era, mid-Cold War science programme.

However, understanding the ECC concatenation is equally hard.

This intellectual difficulty should not be imposed on the general public that funded the missions that made these ECC coding standards so obligatory.

Mathematical and engineering illiteracy hurts mathematicians and engineers right in the pocketbook - so avoid actions that lead to greater levels of redundancy in this profession. If this lot is not getting paid, everyone else's salary is at risk.

Eyreland (talk) 12:25, 6 February 2010 (UTC)[reply]

Hi - I've added the rollback flag. Please review WP:ROLLBACK or ask if you need any help. Pedro :  Chat  20:55, 15 March 2010 (UTC)[reply]

Thanks! Nageh (talk) 21:03, 15 March 2010 (UTC)[reply]

Hi Nageh, I've removed your report from AIV; it's a little too complicated for AIV, where the emphasis is generally on dealing with simple and obvious cases quickly. Your report is better suited for WP:ANI; you need an admin who (a) can spend a little more time looking into this, and (b) knows how to do a rangeblock. I fit (a), but not (b), or I'd do it myself. I suspect if you make that report at WP:ANI, someone will come along who can help. From a review of a few of those IP's, this looks like a reasonable suggestion, except from my limited understanding of IP ranges, I think the range you recommend might be pretty big. --Floquenbeam (talk) 14:21, 18 March 2010 (UTC)[reply]

Hmm ... (Reed Solomon)

Hi Glrx, and thanks for your efforts on Reed-Solomon codes. However, I want to point out that you removed a concise description of how RS codes essentially work, namely by oversampling a polynomial. Even though that statement could have been expanded, it was clear.

I disagree that it was clear. Although RS arrived at their code from an oversampled polynomial viewpoint, that statement is not clear but rather terse. Furthermore, the oversampled view doesn't comport with modern usage. The modern g(x) viewpoint makes s(x) disappear and lets the error correction focus on just n-k syndromes rather than interpolating polynomials. I reworked the introduction to follow the RS development after your comment, and now I'm unhappy with it -- it led me into the same trap that I was trying to fix: describing stuff that distracts. I fell into restating the history. The goal should be to explain the code and give insight into how it works. The modern implementation is the BCH viewpoint and transmits coefficients and not values.Glrx (talk) 21:19, 28 March 2010 (UTC)[reply]
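
A minimal sketch of that "oversampled polynomial" (evaluation) view, using a toy prime field GF(7) rather than the GF(2^m) fields used in practice:

p = 7            # toy prime field; the construction works over any finite field
k, n = 3, 6      # k message symbols, "oversampled" at n > k evaluation points

def rs_encode_by_evaluation(msg):
    # treat msg as the coefficients of a polynomial of degree < k
    # and transmit its values at n distinct field elements
    assert len(msg) == k
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p for x in range(n)]

codeword = rs_encode_by_evaluation([2, 0, 5])
# any k of the n transmitted values determine the polynomial again, which is where
# the (n - k) / 2 error and n - k erasure correction capability comes from

The BCH-style view mentioned above transmits coefficients rather than values.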

The text you added describes RS codes from the point of cyclic codes. Furthermore, what you essentially say is that an error can be detected if the received code word is not divisible by the generator polynomial, which is... trivial from a coding point of view, but does not provide the casual reader with any insight. Furthermore you lead the reader to believe in a tight connection with CRC codes, while the actual connection is with cyclic codes.

I mentioned the CRC processing to build an analogy. I deleted it, and now I'm sorry I did. It also gives context for the error-correction algorithm using the roots of g(x).Glrx (talk) 21:19, 28 March 2010 (UTC)[reply]

Last but not least, it is actually true that RS codes were not implemented in the early 1960s because of their complexity—it _might_ have been possible to actually implement them on some hardware, but nobody did it back then. As far as history tells, RS codes were not implemented until Berlekamp came up with his efficient decoding algorithm together with Massey, after which they were implemented in the Voyager 2 space probe.

I don't understand this comment at all. I deleted a clause that claimed the digital hardware was not advanced enough at the time and left the clause about no practical decoder. The reason the codes were not implemented is because the decoding algorithm was impractical (even on modern hardware) for a large number of errors. If there were a practical decoding algorithm in 1960, there was hardware to do it. Your statement agrees with that assessment, so what does it want? Does it want to keep the inadequate digital technology clause because it may have been possible to implement impractical algorithms in 1960 hardware?Glrx (talk) 21:19, 28 March 2010 (UTC)[reply]


To summarize, the description that you have given is better placed at cyclic codes, and mathematical descriptions, if added, are better placed in the Mathematical formulation section. Cheers, and keep up the work! Nageh (talk) 18:19, 28 March 2010 (UTC)[reply]

RS article in general

My general take is the article is broken in many ways. The original view is a bit interesting from the historical standpoint, but it is irrelevant to how RS codes are used today. The modern decoder interpolates the connection polynomial, but that is not the interpolation that RS described. Even if the Gegenbauer polynomial comment is correct, it is a pointless Easter egg. The Fourier transform description of how decoding works is a pointless, unreferenced fable that takes the reader out of algebra and into a signal processing point of view -- only to switch back to algebra at the last minute because the signal processing view doesn't really help with decoding or understanding. RS used the Huffman D transform (which we could call a z-transform), but the insight is really for the formal power series manipulations.

Yes, the intro and the article need more work. I see moving some other sections up, but most moves also imply rework to accommodate the move. In the intro, RS's original m = today's k. I'll be doing more edits, but you can edit, too.Glrx (talk) 15:41, 31 March 2010 (UTC)[reply]

Rewrite / intro

You've put in a lot of work on the whole article. I appreciate how difficult and time consuming that is.

Actually I am just trying to clean up the mess that was in there, just as you do. :)

I took out the comment about burst errors the first time because it isn't an essential idea of an RS code. The RS code doesn't know it's correcting a burst error.

Lin & Costello is a very notable reference, and it explicitly points out the RS code's ability to correct both random and burst errors. It discusses this from a historical point of view, starting with single- and then multiple-burst error detecting/correcting codes (yes, Fire codes are one of them), and then concludes with Reed-Solomon codes, which it characterizes as "burst-and-random-error-correcting codes" when viewed over bit streams.
I have also added a footnote citing an important application of RS codes as both random and burst error correcting, namely in concatenation with convolutional codes.
Burst error correction is not an essential idea of RS codes, but a notable property. Whether it is notable enough to mention it in the introduction is arguable, and I won't complain if you want it moved to the article body. But note that my intention was simply to characterize them as burst-and-random-error-correcting codes.
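
A quick illustration of the burst-over-bit-streams point, assuming m-bit symbols (the arithmetic below is the only claim being made):

def symbols_hit_by_burst(burst_len_bits, m=8, start_bit=0):
    # number of m-bit symbols touched by a contiguous run of bit errors
    first = start_bit // m
    last = (start_bit + burst_len_bits - 1) // m
    return last - first + 1

# the worst case over all alignments is 1 + ceil((b - 1) / m), so e.g. a 33-bit burst
# corrupts at most 5 byte-wide symbols -- only a few symbol errors for the RS decoder
assert max(symbols_hit_by_burst(33, 8, s) for s in range(8)) == 5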

Now your introduction is claiming that RS is a nonbinary code, but then it flips around and starts talking about a binary representation.

Please be fair and don't misinterpret what I write. An RS code is a non-binary code because it works over symbols of any finite field. The binary representation refers to the fact that any practical code is constructed over finite fields of characteristic 2, i.e., of size 2^m. That means that each symbol can be represented by a bit string of length m. No mystery here. I am surprised that anybody can misunderstand this.

That introduces confusion, so it is not a good idea. Mentioning that RS codes are used in disk drives would be fine; explaining why RS codes are useful for burst errors requires understanding too much detail.

Which is funny because that is the reason I have given to you for moving a lot of your introductory text to the article body.

To view it another way, in the original paper, R and S discussed both random errors and erasures. They discussed mapping the symbols onto a binary alphabet. They did not discuss burst errors, so burst errors were not an issue in the design of RS codes.

The lead section is not about their original paper, but about an introduction to RS codes in general.
And BTW, there is a big difference between mapping onto and mapping over a binary alphabet. RS codes map symbols over a binary alphabet, or more formally, onto a finite field of characteristic 2 (this is the binary aspect), which means symbol sizes 2^m.
And in regards to erasures, the reason I removed them is because they are not so special. Erasures are located errors, and any error-correcting code can correct erasures. In fact, any MDS code with t check symbols can correct t erasures.
And just reading your edit summary, what does "NB in 2D barcodes, not BEC" mean? I don't understand NB, but BEC is binary erasure channel. Please note that any erasure channel is equivalent to the binary erasure channel.
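
To make the erasure point concrete, a toy sketch over the prime field GF(7): with k data symbols oversampled at n points, any k surviving values pin down the polynomial, so up to n - k erased positions can be refilled by interpolation.

p, k, n = 7, 2, 5          # toy parameters; n - k = 3 erasures are tolerable

def lagrange_at(points, x0):
    # evaluate, at x0, the unique polynomial of degree < len(points) through the (xi, yi) pairs
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

codeword = [(x, (2 + 3 * x) % p) for x in range(n)]    # message polynomial 2 + 3x
received = [pt for pt in codeword if pt[0] != 2]       # the symbol at x = 2 was erased
assert lagrange_at(received[:k], 2) == codeword[2][1]  # any k survivors recover it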

There are two versions of classic RS encodings. If the message is P(x), then one can send P(x) g(x) or one can send the systematic P(x) x^{n-k} + remainder. Either version sends a polynomial evenly divisible by g(x). The more common version (and the one described further down in the article) is the latter, systematic version.

Which I explained in the article, right? The systematic method just reconstructs the generator polynomial; that doesn't change how encoding and decoding work otherwise.
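
A toy sketch of the two encodings just mentioned, over GF(7) with a generator polynomial built from a primitive element (3 is primitive mod 7); both versions produce codewords divisible by g(x):

p, n, k = 7, 6, 3          # toy field and code parameters; n - k = 3 parity symbols
alpha = 3                  # a primitive element of GF(7)

def poly_mul(a, b):        # coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_mod(a, g):        # remainder of a(x) modulo the monic polynomial g(x)
    a = a[:]
    for shift in range(len(a) - len(g), -1, -1):
        coef = a[shift + len(g) - 1]
        if coef:
            for j, gj in enumerate(g):
                a[shift + j] = (a[shift + j] - coef * gj) % p
    return a[:len(g) - 1]

g = [1]                    # g(x) = (x - alpha)(x - alpha^2)(x - alpha^3)
for i in range(1, n - k + 1):
    g = poly_mul(g, [(-pow(alpha, i, p)) % p, 1])

msg = [2, 0, 5]            # the message polynomial P(x)

c1 = poly_mul(msg, g)                        # version 1: send P(x) * g(x)
shifted = [0] * (n - k) + msg                # version 2 (systematic): P(x) * x^(n-k) ...
rem = poly_mod(shifted, g)
c2 = [(s - r) % p for s, r in zip(shifted, rem + [0] * k)]   # ... minus the remainder mod g(x)

for c in (c1, c2):         # both vanish at alpha^1 .. alpha^(n-k), i.e. both are divisible by g(x)
    for i in range(1, n - k + 1):
        assert sum(cj * pow(alpha, i * j, p) for j, cj in enumerate(c)) % p == 0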

I'm watching this page, so add replies here and I will see them.Glrx (talk) 21:00, 2 April 2010 (UTC)[reply]

Done. Nageh (talk) 21:37, 2 April 2010 (UTC)[reply]
PS: I have tried to merge both our views into the lead section. I hope you can agree. Otherwise, let's discuss. :) Nageh (talk) 22:03, 2 April 2010 (UTC)[reply]

Edit collision on Rewrite / intro

You've put in a lot of work on the whole article. I appreciate how difficult and time consuming that is.

Actually I am just trying to clean up the mess that was in there, just as you do. :)
and I made a bigger mess as I was doing it.

I took out the comment about burst errors the first time because it isn't an essential idea of an RS code. The RS code doesn't know it's correcting a burst error.

Lin & Costello is a very notable reference, and it explicitly points out the RS code's ability to correct both random and burst errors. It discusses this from a historical point of view, starting with single- and then multiple-burst error detecting/correcting codes (yes, Fire codes are one of them), and then concludes with Reed-Solomon codes, which it characterizes as "burst-and-random-error-correcting codes" when viewed over bit streams.
I have also added a footnote citing an important application of RS codes as both random and burst error correcting, namely in concatenation with convolutional codes.
Burst error correction is not an essential idea of RS codes, but a notable property. Whether it is notable enough to mention it in the introduction is arguable, and I won't complain if you want it moved to the article body. But note that my intention was simply to characterize them as burst-and-random-error-correcting codes.
I think we agree. Burst error should get prominence in either properties (which should move up) or applications. In the body, interleaving can be mentioned.

Now your introduction is claiming that RS is a nonbinary code, but then it flips around and starts talking about a binary representation.

Please be fair and don't misinterpret what I write. An RS code is a non-binary code because it works over symbols of any finite field. The binary representation refers to the fact that any practical code is constructed over finite fields of characteristic 2, i.e., of size 2^m. That means that each symbol can be represented by a bit string of length m. No mystery here. I am surprised that anybody can misunderstand this.
I'm not disputing the theory but rather the presentation. The casual reader is told nonbinary in one sentence and binary in another. The audience need not be versed in coding theory.

That introduces confusion, so it is not a good idea. Mentioning that RS codes are used in disk drives would be fine; explaining why RS codes are useful for burst errors requires understanding too much detail.

Which is funny because that is the reason I have given to you for moving a lot of your introductory text to the article body.
No dispute there. My intro also got tangled in detail.

To view it another way, in the original paper, R and S discussed both random errors and erasures. They discussed mapping the symbols onto a binary alphabet. They did not discuss burst errors, so burst errors were not an issue in the design of RS codes.

The lead section is not about their original paper, but about an introduction to RS codes in general.
And BTW, there is a big difference between mapping onto and mapping over a binary alphabet. RS codes map symbols over a binary alphabet, or more formally, onto a finite field of characteristic 2 (this is the binary aspect), which means symbol sizes 2^m.
Mea culpa, I was being informal. I used the notion as a restricted alphabet used to build words. R&S used a translation of K into a binary alphabet.
And in regards to erasures, the reason I removed them is because they are nothing special. Erasures are located errors, and any error-correcting code can correct erasures. And just reading your edit summary, what does "NB in 2D barcodes, not BEC" mean? I don't understand NB, but BEC is binary erasure channel. Please note that any erasure channel is equivalent to the binary erasure channel.
NB = important. Yes, any error correcting code can correct erasures, but that buries the fact that RS corrects twice as many erasures as errors. I'm looking at the Wikipedia reader and not someone who knows all the implications of MDS.

There are two versions of classic RS encodings. If the message is P(x), then one can send P(x) g(x) or one can send the systematic P(x) x^{n-k} + remainder. Either version sends a polynomial evenly divisible by g(x). The more common version (and the one described further down in the article) is the latter, systematic version.

Which I explained in the article, right? The systematic method just reconstructs the generator polynomial; that doesn't change how encoding and decoding work otherwise.
I wasn't commenting about theory but rather presentation that can confuse a reader. If the reader sees that an RS encoding is P(x)g(x) and then sees that an RS encoding is something different further down, that will confuse him. It's OK to simplify some things.

I'm watching this page, so add replies here and I will see them.Glrx (talk) 21:00, 2 April 2010 (UTC)[reply]

Done. Nageh (talk) 21:37, 2 April 2010 (UTC)[reply]
gotta go. Thanks.Glrx (talk) 22:18, 2 April 2010 (UTC)[reply]

Unindent

(Unindent) Turns out we mostly agree. Yes, the article has multiple issues, but correcting them takes time. (Lots of time.) I am aware that some parts I started working on (section Classic view) are left unfinished. I may continue when I get to it, but no promise there (feel free to work on it).

I understand and agree that I may not assume knowledge on coding theoretic concepts from the reader. It might just take me a while sometimes to get the text right, which means rewriting by myself or by some other person several times. :)

cheers, Nageh (talk) 22:44, 2 April 2010 (UTC)[reply]

Yes, it takes enormous amounts of time. I was very impressed with the time you put into your edits. I still think t erasures is important in the introduction and shouldn't be buried, but we've both got other things to do.Glrx (talk) 23:34, 2 April 2010 (UTC)[reply]

Berlekamp Massey decoder

The reversion of the basic description of the decoder restored what I believe to be an incorrect description of the LFSR. Also, I've only seen one implementation of Berlekamp Massey and many implementations of Euclid in the computer peripherals I've worked on in the past, but your experience differs. Both are popular methods. Euclid is much easier and less complex to implement. (Update: it's simpler to implement; I don't know about gate counts though.) I noted this in the talk page: Talk_Berlekamp_Massey_decoder Rcgldr (talk) 03:53, 20 October 2011 (UTC)[reply]
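
For reference, a minimal Berlekamp Massey (LFSR synthesis) sketch over GF(2); the RS decoder runs the same recurrence over GF(2^m), feeding in the syndromes as the sequence:

def berlekamp_massey_gf2(s):
    # returns (L, C): length and connection polynomial (C[0] = 1, coefficients in GF(2))
    # of the shortest LFSR that generates the bit sequence s
    n = len(s)
    C = [1] + [0] * n      # current connection polynomial
    B = [1] + [0] * n      # copy from before the last length change
    L, m = 0, 1
    for i in range(n):
        d = s[i]           # discrepancy between s[i] and the LFSR's prediction
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:   # discrepancy, and a length change is needed
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:              # discrepancy, but the current length still suffices
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]

# s[i] = s[i-1] ^ s[i-2]: shortest LFSR has length 2, connection polynomial 1 + x + x^2
assert berlekamp_massey_gf2([1, 1, 0, 1, 1, 0, 1, 1]) == (2, [1, 1, 1])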

I updated the brief description in the RS article to match the description and example code in the Berlekamp Massey article.

Rcgldr (talk) 03:53, 20 October 2011 (UTC)[reply]

Euclidean decoder

I added a section for the Euclidean decoder on the discussion (not the article) page.
Talk:Euclidean_Division_Algorithm_Decoder
I created a program for GF(929) (first time I had to deal with add and subtract instead of xor). It's working, but one puzzlement is that when converting the final remainder to omega(x), I have to negate it if the number of errors is odd. If interested, I can post a zip of the source and program on my web site.
Rcgldr (talk) 00:47, 15 October 2011 (UTC)[reply]
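
A rough sketch of the Euclidean (key equation) step over a prime field like GF(929), assuming the usual convention lambda(x)*S(x) = omega(x) mod x^(2t) with S(x) = S_1 + S_2*x + ... + S_2t*x^(2t-1); note that the outputs come out only up to a common scalar factor, which is one place a sign difference like the one described above can creep in:

p = 929                    # prime, so plain modular arithmetic works (unlike the GF(2^m) case)

def deg(a):
    return max((i for i, c in enumerate(a) if c), default=-1)

def poly_sub(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p for i in range(n)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_divmod(a, b):
    a = a[:]
    if deg(a) < deg(b):
        return [0], a
    q = [0] * (deg(a) - deg(b) + 1)
    inv = pow(b[deg(b)], p - 2, p)            # inverse of the leading coefficient of b
    while deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        coef = (a[deg(a)] * inv) % p
        q[shift] = coef
        a = poly_sub(a, poly_mul([0] * shift + [coef], b))
    return q, a

def solve_key_equation(syndromes, t):
    # extended Euclid on (x^(2t), S(x)), stopping when deg(remainder) < t;
    # returns (error locator, error evaluator), each only up to a common scalar factor
    r_prev, r = [0] * (2 * t) + [1], list(syndromes)
    u_prev, u = [0], [1]
    while deg(r) >= t:
        q, rem = poly_divmod(r_prev, r)
        r_prev, r = r, rem
        u_prev, u = u, poly_sub(u_prev, poly_mul(q, u))
    return u, r

Normalizing both outputs by the locator's constant term (so that lambda(0) = 1) removes that scalar ambiguity.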

Added a brief section Euclidean_decoder to main article. Rcgldr (talk) 08:56, 15 October 2011 (UTC)[reply]

Hardware Inversion via sub-field mapping

In case you're interested or curious, here is a link to an old Word 6.0 / 95 document: m8to44.doc . All of this can be implemented in hardware, including circuits for the 8 bit mapping, the GF(16^2) math: a + b, a × b, 1 / a, a^2, and mapping back to the 8 bit field, as a series of stages of about 340 gates, with a single propagation delay. Rcgldr (talk) 09:48, 26 October 2011 (UTC)[reply]
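
For anyone following along, here is a rough software sketch of the sub-field idea (it works directly in the composite GF(16^2) representation and skips the 8-bit basis-change step that the document above covers):

# GF(16) = GF(2)[y] / (y^4 + y + 1), elements stored as 4-bit integers
def gf16_mul(a, b):
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13                  # reduce by y^4 + y + 1
    return r

def gf16_inv(a):
    # brute force is fine for 16 elements (a 16-entry table in hardware)
    return next(x for x in range(16) if gf16_mul(a, x) == 1)

# GF(256) = GF(16)[x] / (x^2 + x + lam); an element a = a1*x + a0 with a0, a1 in GF(16).
# lam must make x^2 + x + lam irreducible over GF(16); pick one by exhaustive check.
lam = next(l for l in range(1, 16)
           if all((gf16_mul(x, x) ^ x ^ l) != 0 for x in range(16)))

def gf256_mul(a1, a0, b1, b0):
    # (a1*x + a0)(b1*x + b0) with x^2 = x + lam
    hi = gf16_mul(a1, b1)
    return gf16_mul(a1, b0) ^ gf16_mul(a0, b1) ^ hi, gf16_mul(a0, b0) ^ gf16_mul(hi, lam)

def gf256_inv(a1, a0):
    # inverse via the norm: a^(-1) = conj(a) / N(a), where conj(a) = a1*x + (a0 + a1)
    # and N(a) = a1^2 * lam + a0 * (a0 + a1) lies in the subfield GF(16)
    norm = gf16_mul(gf16_mul(a1, a1), lam) ^ gf16_mul(a0, a0 ^ a1)
    n_inv = gf16_inv(norm)             # the only "hard" step is one GF(16) inversion
    return gf16_mul(a1, n_inv), gf16_mul(a0 ^ a1, n_inv)

# sanity check: a * a^(-1) == 1 for every nonzero a
for a1 in range(16):
    for a0 in range(16):
        if a1 or a0:
            assert gf256_mul(a1, a0, *gf256_inv(a1, a0)) == (0, 1)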

Thanks, I might take a look. Nageh (talk) 09:49, 26 October 2011 (UTC)[reply]

AfD closing of Susan Scholz

Excuse me, but your AfD closing of Wikipedia:Articles_for_deletion/Susan_Scholz was premature. It should have entered a second round, as it does not conform to the wikipedia policy guidelines of notability. I would appreciate if you would reopen the AfD debate. Thanks, Nageh (talk) 10:33, 7 April 2010 (UTC)[reply]

There is nothing in the deletion policy to support a relisting (see WP:RELIST). The question of whether the article meets notability criteria is a question of fact to be established by consensus on the deletion discussion page. There was no such consensus. Stifle (talk) 10:37, 7 April 2010 (UTC)[reply]
No there is not, but a proper reaction is something that might be expected from an admin deciding on how to proceed on an AfD debate. To me it was obvious that the article was in need of further discussion, and a proper reaction would be to relist the discussion in order to try coming to a consensus. You also ignored an ongoing discussion on her claimed notability according to WP:PROF claim #1. I would again appreciate it if you would reconsider your actions. Thanks, Nageh (talk) 10:42, 7 April 2010 (UTC)[reply]
Admins are bound to operate in accordance with policies and guidelines, and are not entitled to make up new ones on the fly. I'm happy with my no-consensus closure and you're welcome to open a deletion review if you feel it was not correct. Stifle (talk) 10:45, 7 April 2010 (UTC)[reply]

Thank you

Nageh, thank you for your letter. I appreciate the time and thought you gave to what a quick browsing shows to be substantive and constructive writing. After I give it serious study, is this the proper place to reply to you again? Vejlefjord (talk) 21:11, 8 April 2010 (UTC)[reply]

Thank you for the appreciation. Yes, this place is fine for your further comments. Nageh (talk) 21:33, 8 April 2010 (UTC)[reply]

Nageh, I am looking at your 10 Dec 2010 post on my talk in which you wrote “Vejlefjord, I apologize, I did not find enough time yet to go through your new version of the User:Nageh/Theodicy and the Bible (second draft) article. I think I will find more time in the upcoming Christmas holidays.” I hope you can find time and have not given up on the project. Thank you and all the best. Vejlefjord (talk) 22:46, 1 April 2011 (UTC)[reply]

Your words are renewed encouragement to review your latest draft. I will keep it on my to-do list. Nageh (talk) 07:45, 2 April 2011 (UTC)[reply]

Nageh, I hope you can find just a little time to assess User:Nageh/Theodicy and the Bible (second draft). Just a quick check to see if you think it’s OK. If you think it is OK, should we “go live” or follow your earlier suggestion to get someone in the Christianity/Assessment Project to assess it first? Vejlefjord (talk) —Preceding undated comment added 20:52, 14 August 2011 (UTC).[reply]

I feel really sorry I still haven't gotten to review your draft as promised. Unfortunately, I am very busy these days and while I do spend some (too much) time on other stuff on Wikipedia this task is a bigger one I have postponed so far. I won't have much time available anytime soon either so you might try contacting some person on the Christianity project, and if actual work will be going on I might still be able to jump in. Sorry I'm not of more help for the moment. Best, Nageh (talk) 22:06, 14 August 2011 (UTC)[reply]

MathJax in Wikipedia

Thank you so much for sharing this. Now let's think about the following.

  • We need an easy way to update to the most recent version. For this, we should prepare a patch for MathJax that enables us to control from the config file the changes you made. This way, as few changes as possible are required in the actual source code.
  • We need to hunt bugs. The most common reason why a page wouldn't render fully seems to be due to a bug of mediawiki. In my sandbox you can see that ':' and the math tags do not understand each other (with 'display as latex' turned on), which makes it impossible for MathJax to match the opening and closing dollar.
  • Any ideas on how to provide the webfonts on wikipedia so that clients don't need to install anything?
  • We should inform the developers of MathJax and mediawiki.

What do you think? ylloh (talk) 14:05, 13 April 2010 (UTC)[reply]

I'm busy for the rest of the week, but on the week-end I'll try to contact the mediawiki developers about this. ylloh (talk) 09:50, 14 April 2010 (UTC)[reply]

Thanks. Btw, while the number of files for the image fonts is enormous, there are only 21 files for the svg fonts. Btw2, the fonts have received an update on the mathjax svn yesterday. ylloh (talk) 13:45, 14 April 2010 (UTC)[reply]

Wow! That's fast! ylloh (talk) 09:27, 15 April 2010 (UTC)[reply]

Hi, I'm a normal Wiki user just trying to use your MathJax plugin. I added importScript('User:Nageh/mathJax.js'); in my User:Netheril96/vector.js, installed the MathJax web fonts, chose the "Leave as TeX" option, did a Shift-Reload and restarted Firefox (during which I saw a notice on loading some mathjax .js files), but it didn't work. It's just plain TeX. By the way, why did the user ylloh keep talking to himself?--Netheril96 (talk) 01:50, 26 October 2010 (UTC)[reply]

I did not talk to myself ;). Do you have a recent version of firefox? I think you have to install the MathJax fonts locally, which MathJax normally does not require, but which is necessary as the fonts have not been uploaded to wikipedia. ylloh (talk) 08:36, 26 October 2010 (UTC)[reply]

So why is the only signature above my first comment yours? I installed both STIX and the fonts Nageh said (See Help talk:Displaying a formula#Formulas as SVG?) but nothing happened yet. And do you know what the fundamentals of his mathjax.js are? Is it just a copy of MathJax's main javascript file?--Netheril96 (talk) 11:01, 26 October 2010 (UTC)[reply]

I just updated my STIX fonts to the most recent version. They work, but not on wikipedia with this script. The version of MathJax that is uploaded to wikipedia is old and does not seem to be compatible with the most recent fonts, so rendering does not work. MathJax looks so great in a wiki (e.g. in DokuWiki), and I would really like to see wikipedia use it by default. ylloh (talk) 08:55, 26 October 2010 (UTC)[reply]

Hi Netheril96, thanks for your interest. MathJax support is currently broken as I'm waiting for a few bugs I got fixed in MediaWiki more than 6 months ago to become effective on Wikipedia. If anyone knows which people to address to update the mediawiki software on the English wikipedia, please do so.
The script I have uploaded is slightly modified from its original to consider the different directory layout, different settings, and some markup issues. As soon as we have a recent mediawiki version I will continue working on it, but currently there is no point.
Nageh (talk) 11:28, 26 October 2010 (UTC)[reply]
Just to keep interested users updated. The fixes have been applied to the Wikipedia branch, so I ported the current MathJax 1.0 and extended it to respect the additional texvc commands. It should be working just fine now, except it may be a bit slow on maths heavy pages. Have fun, Nageh (talk) 13:13, 13 November 2010 (UTC)[reply]

I want to point out that Firefox+Greasemonkey can be used to render math on wikipedia using the MathJax CDN. That works fine for as long as wikipedia does not support this natively. ylloh (talk) 21:19, 27 September 2011 (UTC)[reply]

Why not use my script, User:Nageh/mathJax? It distinguishes between display and inline math, provides texvc functionality that is not part of (La)TeX, and implements other improvements for Wikipedia support. Nageh (talk) 21:57, 27 September 2011 (UTC)[reply]
Yes, your script is clearly better. Thanks again! ylloh (talk) 20:37, 28 September 2011 (UTC)[reply]

Improvements

Maybe it could have something like

if ( someCheckOnMathPreference ) jsMsg( 'For using MathJax, you need to set math rendering to "Leave it as TeX" in your preferences' );

so that when a user tries to use it without setting the preference they would be informed about this. Helder 16:44, 20 November 2010 (UTC) —Preceding unsigned comment added by Helder.wiki (talkcontribs)

I will consider it. Thanks for the interest! Nageh (talk) 20:09, 20 November 2010 (UTC)[reply]
You're welcome. =)
Do you know if currently it is loaded on every page or just on those which have some element with class="tex"? If it isn't, this could be another improvement in speed when not viewing pages with math. Helder 01:02, 22 November 2010 (UTC)
Implemented in new version. Documentation here. Nageh (talk) 20:40, 22 March 2011 (UTC)[reply]

I have marked you as a reviewer

I have added the "reviewers" property to your user account. This property is related to the Pending changes system that is currently being tried. This system loosens page protection by allowing anonymous users to make "pending" changes which don't become "live" until they're "reviewed". However, logged-in users always see the very latest version of each page with no delay. A good explanation of the system is given in this image. The system is only being used for pages that would otherwise be protected from editing.

If there are "pending" (unreviewed) edits for a page, they will be apparent in a page's history screen; you do not have to go looking for them. There is, however, a list of all articles with changes awaiting review at Special:OldReviewedPages. Because there are so few pages in the trial so far, the latter list is almost always empty. The list of all pages in the pending review system is at Special:StablePages.

To use the system, you can simply edit the page as you normally would, but you should also mark the latest revision as "reviewed" if you have looked at it to ensure it isn't problematic. Edits should generally be accepted if you wouldn't undo them in normal editing: they don't have obvious vandalism, personal attacks, etc. If an edit is problematic, you can fix it by editing or undoing it, just like normal. You are permitted to mark your own changes as reviewed.

The "reviewers" property does not obligate you to do any additional work, and if you like you can simply ignore it. The expectation is that many users will have this property, so that they can review pending revisions in the course of normal editing. However, if you explicitly want to decline the "reviewer" property, you may ask any administrator to remove it for you at any time. — Carl (CBM · talk) 12:33, 18 June 2010 (UTC) — Carl (CBM · talk) 13:20, 18 June 2010 (UTC)[reply]

RfD

I went ahead and RfDed pre coding. A1 was not the correct speedy tag, and because of its age the correct venue is RfD. I made an entry here. Thanks for the follow up. NativeForeigner Talk/Contribs 19:21, 22 June 2010 (UTC)[reply]


General Number Field Sieve

You recently undid my revision 377392928 on the general number field sieve because it does not comply with Wikipedia's definition of L. I just looked at my "Prime Numbers: A Computational Perspective" book and also my "Development of the Number Field Sieve" book, and they use the same notation I use, so this suggests that the problem was not with my general number field sieve correction, but instead with Wikipedia's L definition. I'm also fairly confident that Lenstra and Pomerance were the ones who standardized this notation, so Wikipedia's big O around the front is non-standard, and should be fixed. Let me know if you concur. —Preceding unsigned comment added by Scott contini (talkcontribs) 23:42, 8 August 2010 (UTC)[reply]

UPDATE: You're right that Handbook of Applied Cryptography uses the big O on page 60, Example 2.6.1. and this is not listed as an error on HAC errata web page. I maintain: (i) The Big O on the outside has no effect and does not belong there -- so this can be interpreted as an error which has not yet been reported to the HAC authors, (ii) I think the notation came from either Lenstra or Pomerance, and they do not use Big O. A few sources: pg 358 of the Encyclopedia of cryptography and security (article written by Arjen Lenstra), Any article in the Development of the Number Field Sieve book, and Crandall and Pomerance's book. I'm also going to email Arjen Lenstra on this. More news later. Scott contini (talk) 00:57, 10 August 2010 (UTC)[reply]

Some history: Prior to the number field sieve, the L-notation had only one parameter, because the exponent was always 1/2 for the algorithms they were interested in. I don't yet know when the original L-notation was introduced, but Pomerance used it in his 1982 seminal paper "Analysis and Comparison of some Integer Factoring Algorithms", which can be downloaded from his website. Pomerance did not use any big-O, only little o, and he explains properties of the little o that make it evident that big O is not needed in section 2 (although he does not explicitly say this, it is implied). When the number field sieve was invented, they no longer had the 1/2 in the exponent, so they then added the second parameter to the notation so as to include all subexponential type algorithms. This was a combined analysis by several people including Pomerance and H. Lenstra. It's all in the Development of the Number Field Sieve book. I'm still awaiting a reply from Arjen Lenstra, but based upon this I can pretty confidently say that Handbook of Applied Cryptography (HAC) and Wikipedia are using non-standard notation, and the big O can be eliminated. Let me know your thoughts. Scott contini (talk) 03:20, 10 August 2010 (UTC)[reply]
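
For reference, the notation under discussion is usually defined (as I understand the standard definitions) as

<math>L_n[\alpha, c] = e^{(c + o(1)) (\ln n)^{\alpha} (\ln \ln n)^{1 - \alpha}},</math>

with the original one-parameter notation being the α = 1/2 special case, <math>L_n[c] = L_n[1/2, c]</math>.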

Well, without the Big O it would actually equal Big Theta (Θ). Since the complexity usually refers to the worst case, you'd actually have to use the Big O outside to describe the running time of the algorithm. So either way you'd say it is O(L(...)) or simply L(...), depending on which definition of L you use. Note that the small o inside of L does not take care of that. Nageh (talk) 08:45, 10 August 2010 (UTC)[reply]
Sorry, I have to disagree with you. I also got my reply from Arjen Lenstra which agrees with me. He writes "But O is nonsense if o(1) in exponent". Feel free to email me at scott_contini at yahoo and I will forward his reply to you. But really, the argument is very simple. f(n) = O(g(n)) means there is a constant k such that the function f(n) asymptotically converges to no more than k·g(n). Now replace g(n) with L_n[α, c] = exp((c + o(1)) (ln n)^α (ln ln n)^(1-α)). Observe:
k·L_n[α, c] = exp(ln k + (c + o(1)) (ln n)^α (ln ln n)^(1-α)) = exp((c + o(1)) (ln n)^α (ln ln n)^(1-α)) = L_n[α, c], because ln k is o((ln n)^α (ln ln n)^(1-α)) (e.g. ln k is asymptotically negligible in comparison to (ln n)^α (ln ln n)^(1-α) -- the former is a constant, the latter is not). Scott contini (talk) 12:35, 10 August 2010 (UTC)[reply]
You misunderstood me. It is clear that the small o takes care of multiplicative, exponential, and additive constants. What it does not take care of are functions with lower (different) complexity, i.e., smaller α. For example, a polynomial (in ln(n)) or a constant is not in the set of functions complying to L with any α>0, assuming the definition without the Big O. However, it is in O(L(...)). So while o(1) will ultimately converge to 0, for any c and α>0 the function L_n[α, c] is nonetheless a superpolynomial function, and L is thus a set of such superpolynomial functions. Then, the Big O says nothing but that the worst-case complexity is ultimately superpolynomial (but nothing about e.g. average case complexity). And this is where I see the point: the L notation actually refers to the expected complexity as n tends to infinity (and thus the average complexity). And for this you truly don't need a Big O outside. (And thus the definition of L without O becomes more reasonable.)
Irrespectively, I would very well like to see Lenstra's response as well. I'll send you an email. Thanks! Nageh (talk) 13:47, 10 August 2010 (UTC)[reply]
I see your point now. However, (i) the standard notation does not use big O, and (ii) the running times for the quadratic sieve, the number field sieve, and all of these combination-of-congruences algorithms are the actual running time -- not upper bounds. That is, the running time of the number field sieve is indeed L_n[1/3, c], for the c defined in the algorithm.
So, putting the big O on the outside of this is giving the impression that it is an upper bound, when in fact that is the running time for every suitable number that is fed in. In general, if one wants to indicate that it is an upper bound rather than a proved running time (proved under certain assumptions, namely that NFS-sieved numbers/norms have smoothness probabilities similar to randomly chosen numbers of the same size), then they can add the big O on the outside. That is, they can say O(L_n[α, c]). But if you define the L-notation to have the big O on the outside, then algorithms that have that as their actual run time (not an upper bound) are not able to indicate that. Such an example is the number field sieve. So in addition to points (i) and (ii), I add that (iii) the standard definition (which is not the Wikipedia/HAC definition) is more useful. I'm making the same argument on Talk:L-notation. Scott contini (talk) 23:56, 10 August 2010 (UTC)[reply]
Scott, we're concluding along the same lines. I totally agree with you here, as this is what I tried to say in my previous comment. The L in the GNFS refers to the expected complexity, and thus not to an upper bound, so the Big O is inappropriate. From this point of view I also agree that it makes more sense to define the L without the Big O outside, and use O(L(...)) when you truly want to refer to an upper bound.
What I would like to see now are suitable references, both for personal interest and for including at the L notation Wikipedia article. Nageh (talk) 07:45, 11 August 2010 (UTC)[reply]
Thanks. Glad we are in agreement. Scott contini (talk) 12:00, 11 August 2010 (UTC)[reply]

Vejlefjord: report

Nageh, I gave your (April 2010) helpfully specific critique and guidelines re my first Wikipedia try with “Theodicy and the Bible” the serious study it deserved. Motivated by your response (along with Moonriddengirl's interest), I have done a major rewriting that is posted on http://en.wikipedia.org/wiki/User:Vejlefjord with the title “Major rewriting of ‘Theodicy and the Bible’.” Would you be so kind as to look at the rewrite and tell me whether you consider “the way in which it is presented” (to use your words) Wiki-OK? Trying to write Wiki-style is different (and more difficult for me) than my experience with books and journal articles. Thanks, Vejlefjord Vejlefjord (talk) 02:29, 15 August 2010 (UTC)[reply]

Thanks for your invitation to review, I hope to get to it within the next (several) days. From a first quick analysis, the article seems more accessible now, but there are still a couple of issues left, and in part the presentation got a bit bullet-style. Anyway, I intend to come up with some concrete suggestions for further improving the article after reviewing. Nageh (talk) 09:26, 15 August 2010 (UTC)[reply]
I moved the stuff a bit around. The article is now here. I'm not really active on Wikipedia, so your help is much needed. I think the article has potential and it would be a shame if it couldn't be used. Vesal (talk) 12:24, 21 August 2010 (UTC)[reply]
Thanks for the notice and the initial effort. Right, I absolutely intend to get it back into article space at some point. I'm a little bit restricted time wise as well currently, but I will see what I can do. Nageh (talk) 17:06, 21 August 2010 (UTC)

Vesal and Nageh, re your posts on User talk:Nageh, which I appreciate. I’ll try to do better and get a response to Vesal’s edit ASAP — his edits evoked more issues than he probably expected. You both seem to be busy, so if either of you will point out specific changes you think are needed, I will work on them (or tell you if I have questions about them). I know the article no longer belongs to me, but using theological language appropriately can be difficult, and it may be that, after a lifetime working at it, I am better equipped to do rewriting. Vejlefjord (talk) 22:49, 30 August 2010 (UTC)[reply]

Vejlefjord, I hope to get some time next week or so for this. Anyway, I'll follow the article. Nageh (talk) 23:56, 31 August 2010 (UTC)[reply]
Nageh, in your 15 August 2010 comments re “Theodicy and the Bible - rewrite,” on the plus side you say that “the article seems more accessible now.” On the negative side you say that “there are still a couple of issues left, and in part the presentation got a bit bullet-style.” I look forward to receiving your “concrete suggestions for further improving the article” that you said would be forthcoming whenever you can find time.
Vesal seems to have decided that I am hopeless as a Wikipedia editor/writer and bowed out. See his “4 Reply to your lengthy post” (31 Aug) at User:Vejlefjord talk). He rated my first attempt as “really bad” and rewrite as “outright atrocious.” Very different than his earlier “I think the article has potential and it would be a shame if it couldn't be used.” My lengthy post is on User:Vesal (talk). The first section responded to Vesal’s edited lead and the second section was on my reflections after browsing “Wikipedia:About” and various Discussion pages concerning what I am trying to do for Wikipedia — might be of interest as a look inside the brain of a new volunteer. (One thing I read was a discussion on your talk about how many citations should be used. I have some thoughts if you would like me to post them.)
If you also decide that I am hopeless as a Wikipedia editor/writer, feel free to let me know. I can drop the project and chalk it up to an adventure in the exotic world of Wikipedia. I sometimes feel exasperated by what feels like a Wikipedia priority of form over substance.
A question: am I allowed to delete my first attempt at “Theodicy and the Bible” — take it back, so to speak? Vejlefjord (talk) 00:05, 3 September 2010 (UTC)[reply]
Sigh, I seem to have some problems communicating. I was talking about the current theodicy article, and not his draft. The early theodicy and problem of evil articles on Wikipedia were really bad; most of it was folk arguments, rather than a coherent exposition of the philosophical literature, and the two articles heavily overlapped. As a solution, the theodicy article has been stubbed down to a list of dictionary definitions. So to make this clear: the current theodicy article is outright atrocious!! I said this because I felt Vejlefjord is looking at bad examples of how articles should look, and I wanted to suggest that he rather use the featured articles as models. I have not changed my opinion about theodicy and the Bible, which is far better than anything we have currently on the topic. Vesal (talk) 12:46, 3 September 2010 (UTC)[reply]
Vesal, I am glad I misread your post. I had asked you what version of “Theodicy and the Bible” you were looking at, so I read your post as (a) an answer to my question and (b) comments on the rewrite version. I have redone the Lead according to my reading of WP:LEAD. I have also removed several “is defined as” from the article. I will gladly consider other specific suggestions for improvement. Also I am awed by the meaning of your name: http://www.urbandictionary.com/define.php?term=vesal Vejlefjord (talk) 22:20, 4 September 2010 (UTC)[reply]

Nageh, please find my reply on Discussion/talk on User:Nageh/Theodicy and the Bible (draft rewrite). Vejlefjord (talk) 01:43, 9 September 2010 (UTC)[reply]

Clipping path service

Unfortunately, the article clipping path service has been redirected to clipping path according to the administrative decision. Well, obviously I respect and honor the decision. In this situation, can I edit the clipping path article by adding content, Sir? Thanks for your consideration. Md Saiful Alam (talk) 03:54, 27 August 2010 (UTC)[reply]

Thank you for editing the clipping path service section in the clipping path article Md Saiful Alam (talk) 08:31, 27 August 2010 (UTC)[reply]
You're welcome. Nageh (talk) 08:53, 27 August 2010 (UTC)[reply]

I found the material completely inappropriate and harmful to the encyclopedia under WP:NOTADVERT and others. Active Banana ( bananaphone 20:55, 31 August 2010 (UTC)[reply]

Please read again what I wrote. (Maybe I was not expressing myself clearly. Sorry.) I support your removal of information in Information security as it was of clearly inferior quality, as I stated myself. However, I scrolled through your contribution list and found that you pursue a rather strong attitude of removing information. In some articles I found that the information removed was both correct and harmless. Nageh (talk) 21:02, 31 August 2010 (UTC)[reply]
Oops, I was thinking of a different article too. Re Computer security, all of that content had been flagged for up to a number of years without sources. But back to my general policy: I view leaving content that has not been verified as harmful, because many readers assume that they can believe what they read in Wikipedia, and as the incident above shows, even flagged content can remain in Wikipedia for far too long without appropriate sourcing. Active Banana ( bananaphone 21:11, 31 August 2010 (UTC)[reply]
Honestly, I think that anybody reading an open encyclopedia as an intrinsically reliable source can't be helped. The problem with removing what I call harmless and not doubtful material is that it may result in a loss of valuable information. For one, anonymous edits virtually never provide sources. If we delete all their edits right away, we may just as well ban them. Second, I personally edit at times without providing sources, mainly to correct or extend some information which I know to be correct (because it is in my area of expertise) but for which I couldn't provide a reference right away (because I'd need to look up a book at my university library, for example). Considering this, I think that editors should be given some leeway for articles still under construction when the topic is not susceptible to libelous or discrediting edits (such as purely scientific topics). BTW, I was referring to your removals at Indore. Nageh (talk) 21:44, 31 August 2010 (UTC)[reply]
Not those latest edits at Indore I mean... I am referring to these two edits: [1] and [2]. While poor in style (non-native speakers, obviously) the information was mostly correct. Nageh (talk) 22:19, 31 August 2010 (UTC)[reply]
I think we will have to agree to disagree. You can keep with a looser application of WP:V and I will continue with my more stringent version. Active Banana ( bananaphone 22:35, 31 August 2010 (UTC)[reply]


Please stop inserting the same reference all over as you did with your recent edits. This is considered spamming, as the reference was neither suitable nor adequate in articles not discussing authentication, and for the Authentication article it did not adequately present a more general definition or outline of authentication, for which entity authentication is only a subset. If you would like to post a reply, you may do it here or on my talk page. Thanks, Nageh (talk) 17:34, 31 August 2010 (UTC)

I have read the article and I am pretty sure that the reference does describe "entity authentication" very formally and scientifically. As for your concern that "entity authentication is only a subset of authentication", it's not really correct. The word authentication is an ambiguous term that can be used either for "Entity Authentication" (or User Authentication, if you may call it that) or for Message Authentication. Since the article in question has nothing to do with "Message Authentication", it is very appropriate not to delete my so-called spammed reference. —Preceding unsigned comment added by 130.225.71.18 (talk) 10:15, 1 September 2010 (UTC)[reply]

I completely disagree with all of your points. Does it help the reader? Yes it warns them that the content they are reading is not up to Wikipedia's standards and they should take it with a grain (or more) of salt. Does it help editors? Yes, editors wishing to improve article can use the tag categories to find articles about topics that interest them that have been identified by others as needing help. Is it merely allowed by policies? No, it is encouraged by policies for the above reasons. Active Banana ( bananaphone 17:40, 18 September 2010 (UTC)[reply]

I have great confidence that I will not be able to change your opinion and know that you will not be able to change mine, and so do not feel that continuing this discussion will be a productive use of either of our time. Have a good day.Active Banana ( bananaphone 18:00, 18 September 2010 (UTC)[reply]
Note: I left 2 notes at User talk:Active Banana#Authentication. That's all. :) -- Quiddity (talk) 20:56, 19 September 2010 (UTC)[reply]

World Science Festival FAR

Hi Nageh - Would you mind revisiting your nomination of World Science Festival (page at WP:Featured article review/World Science Festival/archive1) to see if your concerns have been resolved? Thanks, Dana boomer (talk) 13:41, 11 October 2010 (UTC)[reply]

Please, tell me what you want me to do.

Nageh, I suspect you would like to put the “Theodicy and the Bible” project behind you, and I would also. But I don’t know what you want me to do next. Let me recap situation as I understand it:

  • You and Vesal kindly encouraged me to try rewriting the userfied article in Wikipedia style (which I tried to do).
  • You both were also kind enough to critique my rewrite’s conformity to Wikipedia style. Your verdict was that it needed work.
  • You (Nageh) were kind enough to “adopt” the rewrite and give it a home on your user-page for doing the reworking needed. (I fear that I failed to thank you for taking the project into your hands where it belongs because you are the expert on Wikipedia procedures.)
  • You re-sectioned the article. (Unless you ask me to, I will gladly not raise the questions that I had about it.)
  • You also inserted bold-comments for me to work on: a task I said I’d complete after the Lead reworking was settled.
  • The Lead. On 11 Sept, you wrote that you would leave the Lead to Vesal and me. That collaboration seems to be on hold or ended? The last entry on Vesal (talk) was 4 September. And Vesal has not answered my 15 Sept entry on User:Nageh/Theodicy_and_the_Bible_(draft_rewrite). If you think Vesal’s draft is OK, I will happily forget about the Lead and work on your bold-comments. (I wrote my draft only because I had some questions about Vesal’s conformity with what I read in WP:LEAD.)
  • I take it that when (or if) the article is “ready to go live,” it will come from you, so I see my role as trying to do what you want me to do.

So, please tell me what you want me to do next. Or ask me to clarify what I have written. Vejlefjord (talk) 19:16, 24 October 2010 (UTC)[reply]

Vejlefjord, it's good to see you haven't completely given up yet. First, thanks for the flattery of calling me a Wikipedia expert, though I'd rather say I'm just experienced. In contrast, I'd call you the expert on the subject of theodicy, which is why I left rewriting the lead to you and Vesal. Unfortunately, Vesal seems to be rather busy, and my time is quite limited currently as well.
You were mentioning some other person whom you'd have liked to call in for help. Maybe this is a good time to do so? In any case, if this article is gonna go anywhere soon, I think it should be somebody with some time to devote to this article, at least remotely familiar with the subject, and at best also a native English speaker (which I'm not).
Regarding my re-sectioning effort, feel free to raise your concerns, this is a community effort not a dictatorship. Anyway, your subtle criticism clearly demands some justification from my side. When I was reading your article first, as someone previously completely unfamiliar with the topic, I had serious trouble understanding it, and I had to go through it several times to be able to interconnect the various concepts you laid out in subsections of "God and evil in the Bible" and "Bible and theodic issues", and to understand what really are the theodic arguments and justifications, as opposed to examples and conflicting interpretations. This process was complicated by the myriad of sub- and subsubsections, which -- as I said previously -- are presented rather bullet-style, and thus may be more appropriate for overhead slides with a teacher (professor) explaining them.
So the first thing I did was trying to bring the concepts in order such that someone unfamiliar with the subject (like me) would have the two important issues concerning the rest of the article clearly laid out at the start, which -- as I understood them -- are "Examples of evil and God's role therein/God and evil" and "Conflicting interpretations". The first section also tries to clearly motivate the need for a theodicy, leading over to other sections in its last paragraph.
The second thing I did was getting rid of the too many subsubsections, and instead add some prose. This helps tremendously when you're studying from paper instead of from slides with a live teacher.
You will notice that I stopped somewhere in the Positive Theodicy section because I failed to comprehend part of it, and really it took me quite some time to do all this.
If you think I got something wrong or the quality of your article got degraded by these actions of mine, speak out. It may be because I did not correctly understand something, because I'm not an expert on the subject, or simply because I'm not a native speaker.
Nageh (talk) 12:26, 26 October 2010 (UTC)[reply]


Nageh, thank you for your reply. Here is what I am doing and thinking in response:

  • I appreciate how much improvement you made in your rewriting. (The many sub-sections I had were based on what seems to have been two misconceptions: (1) that I needed to do so for increased “accessibility” and (2) that every section should be able (like the Lead) to stand on its own.)
  • I have cut and pasted the current draft into my computer. I am making very few small changes in what you did, but I am making many more in what I wrote — mostly by deleting material that is not essential to the central points, in order to shorten and simplify the article. Wiki instructions call for surveying the “experts” on an article's topic and presenting a consensus. But there are thousands of “experts” on the Bible and theodicy with countless differing positions. I think I have tried to work in more material from my sampling of 100+ “experts” than a Wikipedia article can bear.
  • I plan to do the shortening and simplifying work on what I wrote all the way through to the end of the article.
  • That you are “not an expert on the subject” and “not a native speaker” are (in my mind) assets for getting the article into good shape. It helps you spot religious jargon and infelicitous verbiage.
  • Rather than investing your time in trying to clarify my unclarities, I suggest that you just tell me that something is “unclear” and I’ll try to clarify it. In this suggestion, I am trying to honor your statement about your time: “my time is quite limited currently.”
  • When I have completed what I am doing ASAP, I would like to offer it to you for critique. (I am not suggesting that you invest the time required for further rewriting unless you want to.) Could you set up a User:Nageh/Theodicy_and_the_Bible_(second_draft) or tell me the best procedure for posting it?
  • After the body of the article is done, we can get back to the Lead, a task that should not take long.
  • You said that I had mentioned “some other person who you'd have liked to call for help.” It is not that I wanted to call in someone else; it is that both you and Moonriddengirl have suggested it. My idea is that when you think the article is ready to “go live,” we can ask one or two members of the WikiProject_Christianity to take a look.

Please let me know if you have other ideas for proceeding. All the best. Vejlefjord (talk) 16:44, 29 October 2010 (UTC)[reply]

That sounds like a good plan. I have set up User:Nageh/Theodicy_and_the_Bible_(second_draft) with the content of my draft rewrite. Feel free to edit/replace as you wish.
Nageh (talk) 22:01, 31 October 2010 (UTC)[reply]


Nageh, I have posted the results of the work I said I would do on User:Nageh/Theodicy_and_the_Bible_(second_draft). For expediency, I simply replaced what was there, knowing that you can retrieve it. I have prefaced this “second_draft” with an explanation of what I did and rationales from Wikipedia guidelines.

The work took longer than I had anticipated because I spotted more things that needed saying to give a more accurate and clearer view of the state of the subject in the writings of “experts.” But, not being allowed to write from the knowledge of the subject in my brain, I spent many hours finding books with words that said what needed to be said. If I were writing with no knowledge of the subject except what I had gleaned from a sampling of books (as in a term paper in theological school), the task would be much easier.

I will check back on User:Nageh/Theodicy_and_the_Bible_(second_draft) Discussion/Talk in a few days to read your responses to what I posted. All the best. Vejlefjord (talk) 02:50, 16 November 2010 (UTC)[reply]

Deprecated template

Not me. I actually am not responsible for deprecating {{cleanup-section}}; I'm just replacing its transclusions. Furthermore, I know as much about the rationale as you do. If you really want me to investigate it, I guess I can, but I'm just going to replace it and nominate it for deletion (or redirection). —Justin (koavf)TCM☯ 22:10, 1 November 2010 (UTC)[reply]

I see. Well, feel free to do whatever you think should be done. But we really should not advocate use of Cleanup-section on the Template:Cleanup page then. Nageh (talk) 09:48, 2 November 2010 (UTC)[reply]

Tip

Hi!

About this, I think you can request the deletion by inserting both the template and the Category:Candidates for speedy deletion into the js page. Something like this should work:

//{{db-userreq}} [[Category:Candidates for speedy deletion]]

(The page will be displayed in the category, even if the tag doesn't appear at the bottom of the script page, and because this comes after //, a JS comment, there will be no JS syntax errors if the page is still imported somewhere.) Helder 18:57, 7 December 2010 (UTC)

Thanks! That looks good indeed! Nageh (talk) 19:50, 7 December 2010 (UTC)[reply]

Regarding my reverts for P_versus_NP_problem

Did you read my proof for P_versus_NP_problem? What part do you disagree with? Vivek (talk) 13:34, 10 December 2010 (UTC)[reply]

Your proof does not make sense at all. Nageh (talk) 13:46, 10 December 2010 (UTC)[reply]
Okay, not trolling here, and apologies for messing up the chronology of this page. Since you seem to be more knowledgeable than me (and your Wikipedia contributions confirm your ability), would you be so kind as to tell me where I've messed up? I've added a Proof by Contradiction to clarify https://sites.google.com/site/pnproof/.

Regards and thanks for your time. Vivek (talk) 17:42, 10 December 2010 (UTC)[reply]

MathJax at Village Pump

Someone's brought up MathJax at WP:VPT#Wikipedia Mathematics; you might like to join in as the one that's been working on this here.--JohnBlackburnewordsdeeds 01:12, 26 December 2010 (UTC)[reply]

Thanks, I posted my comment here. Nageh (talk) 09:25, 26 December 2010 (UTC)[reply]
There is also a new thread on wikitech-l you may be interested in ;-) Helder 22:30, 19 January 2011 (UTC)[reply]

CSI effect

Hey mate, thanks for the feedback on CSI effect. Some of the points you raised seemed to be more focused on forensic evidence itself rather than the CSI effect. Nonetheless, I have added some new material in an attempt to address your concerns. I would greatly appreciate it if you would have a second look. Thanks! --Cryptic C62 · Talk 23:06, 26 December 2010 (UTC)[reply]

IPv6 reverting

You know it's because of commie liberals like yourself that Conservapedia exists. Why does everyone hate on the high quality journalism of Fox News? —Preceding unsigned comment added by 213.155.151.233 (talk) 23:17, 26 January 2011 (UTC)[reply]

Funny that you call me that. But since I am working in the IETF myself, I think I know better than Fox News. And FYI, I'm not American, so what you intend as an insult does not work on me. Goodbye, Nageh (talk) 23:22, 26 January 2011 (UTC)[reply]
OK... but just for my edification, what's the other country besides the States that doesn't understand sarcasm? 213.155.151.233 (talk) 00:03, 27 January 2011 (UTC)[reply]
:) Nageh (talk) 09:03, 27 January 2011 (UTC)[reply]
Just as an actual comment to the quality journalism of Fox News: [3] ;) Nageh (talk) 18:47, 31 January 2011 (UTC)[reply]

Hello. I translated Absolute convergence to the Japanese Wikipedia (ja:絶対収束). In doing this work, I noticed that the proof in section 3.2 was wrong, so I rearranged the proof. Please double-check the proof. --Loasa (talk) 08:27, 28 January 2011 (UTC)[reply]

Listed

You've been listed.[4] I noticed your cryptography on your user page. When I was 14-17, Turing's buddy Alonzo Church was my mentor. :) PPdd (talk) 23:02, 22 February 2011 (UTC)[reply]

Well, thanks for the honor! Yeah, cryptography is part of my daytime research. Nice that you have known such a celebrity!! :) Nageh (talk) 23:48, 22 February 2011 (UTC)[reply]
Not celebrity, personality. PPdd (talk) 02:20, 23 February 2011 (UTC)[reply]

Hello Nageh!

I have read your note at RFT where you recommended placing this article on Wikiversity. I am not an expert user of Wikipedia, which is why I didn't know about Wikiversity before. I will look at it later if I have time, and maybe I will decide to move this article to Wikiversity. Regards, --Laszlohajdu (talk) 08:47, 28 February 2011 (UTC)[reply]

Foreign language

As you know, I am a new editor (my edit counts are artificially high; only a handful of articles edited, and almost all in recent weeks), so I cannot foresee policy ramifications that experienced editors have already seen. But I do not understand your reasoning for not suggesting (not mandating) translation for readers. A case in point is when I was reading papers of a Chinese first-year mathematics doctoral student at Stanford. As the only doctoral student chosen from China, she certainly had no cause to fake anything. She would make huge jumps in reasoning no one else made, then when questioned, she would cite one of her Chinese textbooks and show it to me. I made her at a minimum copy and translate the conclusion of the theorem she was citing into English, though not the proof of the result, so I could minimally verify it. The situation here is almost identical. Your argument that there is background information ordinary readers don't have may be another problem, but it shouldn't stop fixing the separate problem of verifiability by English speakers, especially those who are experts in the area of the article, who only lack an ability to verify in another language. And the chance of error in translation, or interpretation, from a foreign-language source is on top of any other problems you cited that may exist pertaining to matters other than translation. So I do not understand your reasoning not to at least partially fix this translation-verifiability problem with a suggestion (not a mandate), even if the other verifiability problems are not addressed. PPdd (talk) 22:10, 4 March 2011 (UTC)[reply]

The main objection is this: (1) If you, as a reader, do not trust an article statement then you cannot trust the translation provided either. Arguing that it is needed to verify that the source material is interpreted or summarized correctly is ridiculous, IMO. On top of this come two more issues: (2) Many statements are summaries of many sentences, paragraphs, or even entire articles. For example, when I add information from newspapers to an article about a person or a company I usually extract one or two sentences from the entire newspaper article, which are usually not to be found in such condensed form in the original source. Am I supposed to provide a translation of the entire article? Paragraphs? (3) Even with a trustworthy translation the sentences may be interpreted out of context, and reliable portrayal of the source may require substantial translated text. As long as I do not see where and how not providing a translation could lead to concerns regarding the reliability of an article's text, I stand by my position. Nageh (talk) 23:22, 4 March 2011 (UTC)[reply]
(Pre-PS: I am discussing on your talk page for focus, because I do not want a large block of exchange distributed in the middle of hopefully pithy comments by others, and because you seem to reason in a way I find intelligible.) You are pointing to examples of where suggesting brief copy and translate will not work, but this only argues that the proposal may only partially solve the lack of perfect verifiability problem. Here is an example where it does help, and your points 1 and 2 would not be objections. Suppose in my Chinese grad student example I was only in correspondence with the grad student. She would then have to copy the brief theorem statement (not the proof) in Chinese script, and translate it into English. If she copied from an online text, I would have no idea what the script symbols were, but I could verify that it was actually a copy of what was online. Without the copy, I would have no idea how to even begin to look up any specifics online. I could run translation software to at least approximate a translation, and figure out the statement of the theorem, which if I did not know about, at least I could read it as to its being at least plausible. If it was not available online, the verifiability would diminish (as to its being in the text), but it would still be more verifiability than nothing, as I could run her copy in some translation software and roughly check her translation. I could then at least evaluate the theorem's plausibility, and I would have at least some verifiability. There is a case in which your (1) does not stop at least some verification, even if it is not perfect. Regarding your point (2), a large block copied would not be allowed under copyright, so there would be no brief copy and translation. But that does not mean in the cases that a brief passage could be copied and translated, this amount of verifiability is not better than none. So the proposal is not a complete solution, but it is better than nothing. Also, I think you are failing to distinguish between suggesting the proposal be followed, and mandating it. PPdd (talk) 00:35, 5 March 2011 (UTC)[reply]
This is fine. I copied it to the WP:V page because I considered it to be of interest for others as well. Now to your example. Obviously you could verify her source because she presented the translation within a context that she couldn't have made up herself. But this is exactly the problem: You need to present original statements within their context to make verification anything like reasonable. This may work for a few specific cases but for the majority of cases this will not be practical due to the amount of translated text required. This means that a translation may be provided if an editor thinks that it can improve verifiability or if another editor thinks so and requests a translation, and this is within reasonable bounds. I'm fine with something like this, but this is already said in our current policy. Nageh (talk) 14:30, 5 March 2011 (UTC)[reply]
  • (1) I "added" points #3 and #4 to my "support" vote at the top. Point (3) argues for including the proposal only as a policy "suggestion" that is helpful to general readers. Point (4) argues for including the proposal as a policy "mandate".
  • (2) I like bullet points (and when they are numbered when greater than 3), and am glad to see an editor like you who uses them.
  • (3) I just wrote the Ridiculousness article. Your "Arguing that it is needed to verify that the source material is interpreted or summarized correctly is ridiculous, IMO" comment is covered under the Nietzsche invective example, and some of your comments, and most of the comments of others, under the reductio ad absurdum sections. You might want to pull that particular line, and supply further argument. A reason it is not ridiculous is my own repeated good intentions but errors of interpretation, in which cases I have to eat crow and change it when other editors try to verify my interpretation and find errors. A case in point is an interpretation of a translation regarding the AfD on Informacion Filosofico. I found a source showing it had been around since 1945, arguing for some notability, as this is a long time for a philosophy journal (or any journal) to survive, but it turned out my translation and interpretation were in error, as there was a different IF back in 1945, and the journal in question was started in 2005. A translation of the source would have fixed this straight out, but it only ended up even being noticed and fixed because it went to AfD.
  • (4) Regarding "statements may be summaries from several sentences, paragraphs, or even entire (e.g., newspaper) articles. Are you supposed to provide the translation of an entire article? Paragraphs?" No. Verifiability is a matter of degree confined by reasonability. It can be increased for some examples, but there will always be counterexamples. That does not mean we should give up when it can be improved for some cases, even if it cannot be improved for all.
  • (5) Re "Assuming you provide a partial translation, how can you guarantee that the text is not interpreted out of context? By providing more translation again? How much more?". That is an argument by infinite regress. It would argue against ever providing a definition, reduction, analysis, or improvement, because there is always another step not provided for. Stephen Hawking told of Bertrand Russell discussing the heliocentric model replacing the Ptolemaic model, and the Ptolemaic model replacing the flat-earth model. A little old woman stood up in the back of Russell's audience and said, "You're wrong, the world is flat and rests on the back of a turtle." Russell responded, "Then what does the turtle rest on?" The woman replied, "Another turtle." Russell asked, "What does that turtle rest on?" The woman replied, "It's turtles all the way down." This reminded me of Dr. Seuss's cover for Yertle the Turtle. I figured out how to shut that woman up when she was just a little girl asking, "But mommy, if God created everything then who created God?" The answer is simple. The Clown created God. Who created the Clown? Well, the Clown had her period really bad one week, lasting for six bad days and causing great constipation. On the seventh day, she finally had a bowel movement, giving birth to herself by shitting herself out. This explains why there is a 7-day week and 28-day lunar and menstrual cycles, as 28 is a nice multiple of 4. It also shuts up that little girl's infinite regress argument. I hope it works on you, too. :) PPdd (talk) 15:11, 5 March 2011 (UTC)[reply]


I think I made it clear that there may be specific cases where an additional translation provided may be helpful, which is already covered by our current policy. Ad (3): There are two questions. Why, as a reader, do you look up a source that is referenced within an article? Because you want to verify the editor did his job correctly? Or because you want to know more about a specific statement? Normally, I assume, it is the latter, in which case I will need to take the original (full) source. However, if I really doubt that a referenced statement is correct then it may be helpful to have a translation of the text in question. But then I must ask myself, why should I trust the translation when I don't even trust the statement in the article? As I said there may be very specific cases where it is helpful, but this can be dealt with on a case-by-case basis (following discussion and consensus). Ad (5), it was a different way of asking "Where do you draw the line?" Quoting one or two sentences may be fine if they are not taken out of context. In the majority of cases I assume this will not work as simply as that.

Again, I do not say that providing a translation is completely pointless, but I think our current policy wording already covers it. Nageh (talk) 16:00, 5 March 2011 (UTC)[reply]

NWP FAC

I have tried to address the concerns you brought up. How does it look now? Titoxd(?!? - cool stuff) 22:06, 4 March 2011 (UTC)[reply]

Just a note, I declined the speedy deletion request. "Patent nonsense" means incomprehensible gibberish, see WP:PN. The article was clearly written and the subject is understandable (it's a complex encryption scheme that involves physically moving characters around each other like a solar system). It's also something the author made up in college some time in the last week, so I'm proposing it for deletion. I just thought I'd let you know, thanks. -- Atama 23:03, 10 May 2011 (UTC)[reply]

"Patent nonsense" was the best I could find among the list for speedy deletion. It's coming close, even when written comprehensively, sorry. :) But anyway, it was not really appropriate in the end, so I dropped the editor a note on his talk page. Well, thanks for changing it to the correct deletion proposal then. Nageh (talk) 06:52, 11 May 2011 (UTC)[reply]

Nageh, many thanks for your excellent feedback on my draft on authentication. I will re-work this, hopefully over the next few weeks. FrankFlanagan (talk) 19:43, 24 May 2011 (UTC)[reply]

Woo Lam protocol afd discussion

Nageh, in your comments regarding the article you quoted some lines to be changed ("public-key encryption with a private key", "signing with the public key"). I have not found these lines in the article. Could you maybe clarify what you meant? Also, I would like to thank you for all your assistance. Mike. — Preceding unsigned comment added by Mike2learn (talkcontribs) 10:36, 26 May 2011 (UTC)[reply]

why revert?

May I ask why you reverted my edit [5]? Is there an alternative to ? I actually required it in a wikibook to avoid the PNG formula, which really is plainly ugly when it appears inline in text. --Martin Kraus (talk) 05:41, 13 July 2011 (UTC)[reply]

Sorry, I wanted to add an edit summary but the script that would allow me to do this seems to be repeatedly broken. The reason for the revert is that you should not use hacks to effect an intended rendering style. If you want to force HTML then use HTML tags: x<sub>2</sub><sup>3</sup>. There is also {{Template:Math}} for those who like it. Note also that apart from your math hack being semantically wrong, there is the danger that a line break will occur between the 2 and the 3. Finally, you might be interested in MOS:MATH. HTH, Nageh (talk) 07:31, 13 July 2011 (UTC)[reply]
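
For illustration, the variants mentioned above look roughly like this in wiki markup (a minimal sketch; x with subscript 2 and superscript 3 is just an assumed example):

<math>x_2^3</math>                      <!-- TeX markup, currently rendered as PNG -->
''x''<sub>2</sub><sup>3</sup>           <!-- forced HTML via sub/sup tags -->
{{math|''x''<sub>2</sub><sup>3</sup>}}  <!-- the {{math}} template wrapping the same HTML -->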

A barnstar for you!

The Technical Barnstar
Thanks for your coding know-how that enabled the project to produce {{starred}}. I know it might seem trivial to you; but it seems impressive to most of us. Good work! I look forward to seeing what else you come up with. Fly by Night (talk) 00:01, 23 July 2011 (UTC)[reply]
Wow! Thank you very much for the honor! Nageh (talk) 08:57, 23 July 2011 (UTC)[reply]

What is so presumptuous?

Why did you say the nom was "presumptuous"? Maury Markowitz (talk) 14:02, 26 July 2011 (UTC)[reply]

Because the article is nowhere near FAC status. Please, if you open a peer review, asking for others' efforts in reviewing, don't close it prematurely. Anyway, don't take it too personally... Cheers, Nageh (talk) 15:18, 26 July 2011 (UTC)[reply]
Yes, it is very close to FA. The only remaining issue was a point of formatting where I disagreed with the reviewer's understanding of the MoS. I did not close the PR; it closed automatically due to a problem with the automation. Perhaps you could take the time to be sure you understand exactly what happened before passing judgement on my actions. Maury Markowitz (talk) 01:38, 29 July 2011 (UTC)[reply]
Perhaps you could take the time to react to the comments I left at your PR, as I have already indicated on SandyGeorgia's talk page? Up to this time you have decided to ignore my comments for your close-to-FAC article!!! Nageh (talk) 07:57, 29 July 2011 (UTC)[reply]
I'm sorry, I do not see any, and it doesn't seem that you made any edits. Maury Markowitz (talk) 12:59, 29 July 2011 (UTC)[reply]
Sigh. This is the link to the FAC nomination. The link I provided at SandyGeorgia's talk page is to the Peer Review (PR): Wikipedia:Peer review/Space debris/archive2 (copied from SandyGeorgia's talk page). I am aware that you probably don't have this page on your watchlist but really you should have followed the link when I posted it. Nageh (talk) 15:55, 29 July 2011 (UTC)[reply]

Anonymous-key cryptography

Hi Nageh, I have revised the article on Anonymous Key Cryptography. Is it still too specific with the algorithm exposed? A closed-loop system which uses user credentials and bypasses the PKI system is still generic and should merit an entry somewhere. I had written a generic article on anonymous-key cryptography under the general heading Modern Cryptography under the PKI entry. It has been deleted. I am including it here for your consideration and advice.

Anonymous Key Cryptography There is another way to authenticate users, transactions and files called anonymous-key cryptography. Anonymous-key cryptography takes advantage of the rapidity of symmetric key encryption in conjunction with a specific software application to produce extremely high levels of security, generating key strengths of over 256 bits. In this system there is a single key – generated using AES and user credentials such as a user-id and password, smartcard, biometric, or a combination thereof. That same key encrypts and decrypts information in a closed-loop system that does not rely on third-party authentication.

There are no handshakes or key exchanges in anonymous-key cryptography. There is only one key and the software on both ends that recognizes key validity and provides encryption or decryption services. In this system, for example, an online purchaser would browse to the web site of a seller. To ensure security for the buyer, the seller has integrated an anonymous-key cryptographic application into their transaction service. The purchaser receives a small java applet which asks them to provide their credentials during the normal course of account creation. The seller’s system then uses this information in a cryptographic module to create a very strong key for all transactions. The seller’s system saves the key in encrypted form (using the same cryptographic module) so it is protected from any but a brute force attack and then is vulnerable only if the physical server it resides on is stolen. Since only user credentials are exposed, the key remains safe from spoofing, replay, and man-in-the-middle attacks. Even if a key-logger captures the credential, the key – because it is generated elsewhere using AES -- remains safe. Hdrugge (talk) 21:24, 28 July 2011 (UTC)[reply]


Hi Nageh,

I will work on fixing the article - thanks for your feedback.

207.6.253.182 (talk) —Preceding undated comment added 17:55, 29 July 2011 (UTC).[reply]

Coding theory query(very sorry really)

User:DVdm passed through the alternative account for this user (i.e. Neutralcurrent) and noted edits considered vandalism. This edit (Coding theory) didn't seem like vandalism either, although there were two elsewhere which needed attention. In conclusion, I pressed rollback while investigating this function. Drift chambers (talk) 16:36, 29 August 2011 (UTC)[reply]

82.36.156.186 is my IP, but the screen showed my username as logged on, so it was a genuine mistake rather than an effort at surreptitiousness; I was intending to retain the links. Drift chambers (talk) 11:09, 1 September 2011 (UTC)[reply]

Srinivasa Ramanujan

I did not intend to reinstate the text (although if you'd wanted the answer to that question, you should have asked me), I tried to close the faulty div that Gadget removed just after. The edit conflict situation clearly ruined that plan. Please, I don't deserve three question marks. Grandiose (me, talk, contribs) 16:12, 30 August 2011 (UTC)[reply]

Ok, my bad, I should have assumed more good faith, didn't consider an edit conflict. Sorry, and cheers, Nageh (talk) 17:03, 30 August 2011 (UTC)[reply]
OK, keep up the good work you appear to be doing! Grandiose (me, talk, contribs) 17:13, 30 August 2011 (UTC)[reply]

Cryptography

Oh no, that was meant for Drift chambers. Sorry if I gave the wrong impression. Hut 8.5 11:07, 1 September 2011 (UTC)[reply]

Ok, thanks for clarifying. Nageh (talk) 11:09, 1 September 2011 (UTC)[reply]

Please comment on 2011 Southern US drought

Responding to RFCs

Remember that RFCs are part of Dispute Resolution and at times may take place in a heated environment. Please take a look at the relevant RFC page before responding and be sure that you are willing and able to enter that environment and contribute to making the discussion a calm and productive one focussed on the content issue at hand. See also Wikipedia:Requests for comment#Suggestions for responding.

Greetings! You have been randomly selected to receive an invitation to participate in the request for comment on 2011 Southern US drought. Should you wish to respond to the invitation, your contribution to this discussion will be very much appreciated! However, please note that your input will carry no greater weight than anyone else's: remember that an RFC aims to reach a reasoned consensus position, and is not a vote. In support of that, your contribution should focus on thoughtful evaluation of the issues and available evidence, and provide further relevant evidence if possible.

You have received this notice because your name is on Wikipedia:Feedback request service. If you do not wish to receive these types of notices, please remove your name from that page. RFC bot (talk) 21:05, 8 September 2011 (UTC)[reply]

Common vs proper nouns in telecom

Hi, I'll revert the ones you've named on my page; can you help me to understand what absolutely must be uppercased in telecom? It's well-known for uppercasing any noun in sight, like a scattergun. What is a good rule of thumb? Anything that ends in these words?

Can you add to them? Tony (talk) 12:43, 12 September 2011 (UTC)[reply]

It's true that telecom folks tend to uppercase about every noun describing a technical function. A noun should be upper-cased whenever it describes a specific standardized function or protocol, but it should be written in lower-case letters when it refers only generally to some functionality. Terms ending with Protocol usually are specific solutions and should be upper-cased. Terms ending with network usually describe classes of solutions, and should be lower-cased – I would therefore also lower-case generic access network. As for the other words you mention, I think these would need to be handled on a case-by-case basis. Nageh (talk) 13:04, 12 September 2011 (UTC)[reply]
"High-speed packet access"... can't revert to upper case with hyphen, which is a bore. It will take upper case without hyphen, and lower case with. Do I need to do a request a move?
Multi-Protocol Label Switching—I removed the request, but there is a thread on the talk page asking whether it's a protocol or not, just to emphasise how hard it can be to tell whether a topic is just the uppercasing of a common noun (process, technique) because it's abbreviated with caps (disapproved of by WP), or whether it really is a proper name. "Frame relay" says its a technology; I see now you need to read on to learn that it's a protocol. OK, care needed. May I dump some titles here for you to tick or cross when in doubt, which looks like it might be much of the time? Tony (talk) 13:09, 12 September 2011 (UTC)[reply]
One more I've downcased that I'm now concerned about: System architecture evolution ... it's not a standard itself, but a core architecture of a specific standard. I won't pester you again (today). Tony (talk) 13:14, 12 September 2011 (UTC)[reply]
Yes, you would need to request a move if the new title is already assigned.
MPLS is a protocol, I replied on the talk page.
I would think that System architecture evolution should be upper-cased, though I agree it is sometimes difficult to tell.
HTH, and thanks for coming back to me on this. Gotta go now. :) Nageh (talk) 13:22, 12 September 2011 (UTC)[reply]
Nageh, I've raised the matter of the sorry state of WP:Manual of Style/Computing, at the Wikipedia_talk:Manual_of_Style#WP:MOS.2FComputing. The copyright issues, BTW, are not germane to the point I'm raising, but I do think they need a mention in the MoS subpage. Are you in a position to advise on proposals for inclusion/change in this MoS subpage? Is it on your watchlist? Tony (talk) 01:54, 14 September 2011 (UTC)[reply]
Sorry, busy at the moment, no time to look at it. Maybe later. Nageh (talk) 20:30, 16 September 2011 (UTC)[reply]

Your advice?

Hi, could you let me know if this shouldn't be downcased?

Integrated Services Digital Network. Tony (talk) 03:52, 16 September 2011 (UTC)[reply]

It refers to a specific standard, so it must be upper-cased. Nageh (talk) 20:18, 16 September 2011 (UTC)[reply]

It was patented originally, but I believe it can be reasonably assumed to have passed into generic use. Please let me know if I shouldn't proceed with a move request. Tony (talk) 02:26, 25 September 2011 (UTC)[reply]

I am not convinced that "digital audio tape" has entered standard usage. At best, it would be a synonym for the standard, i.e., people say "digital audio tape" but mean the tape based on the specific technique "Digital Audio Tape". What about the Compact Disc? Isn't it still put in upper-case letters? While I don't think people actually say "compact disc" rather than "CD", I don't think they would put it in lower-case letters either. Did you do a Google Books/Scholar check on the spelling? It seems upper-casing would still be appropriate. After all, there have certainly been competing digital audio tape technologies, but this one got named as such. Compare Compact Cassette (specific and a trademark) with compact audio cassette (generic). Digital Audio Tape is (or was) a trademark as well. I would leave it upper-cased. Nageh (talk) 17:27, 25 September 2011 (UTC)[reply]

Remote Operations Service Element protocol move revert

I did not revert your move. I moved the page per the discussion, and before I could get it closed the page was moved. I just figured that I somehow dropped a word in entering the new title, as my edit summaries stated. And I believe that you supported the move I did. Vegaswikian (talk) 23:28, 27 September 2011 (UTC)[reply]

Hi, any reason this tool can't have a small "e"? I'm still learning about this difficult issue. Tony (talk) 13:49, 1 October 2011 (UTC)[reply]

It is the name of the software, and not about enablement of Aba CM. But don't worry, I have sent it to AfD. Nageh (talk) 20:43, 1 October 2011 (UTC)[reply]
OK, thanks. Hmm ... it underlines why capitalisation needs to be cleaned up: if used frivolously, the casual reader/editor can't really tell whether it's the enablement of Aba CM (which is how I took it). If capitalisation is used consistently, we won't doubt when we see it. Tony (talk) 04:00, 2 October 2011 (UTC)[reply]

New feature doesn't work in MathJax

Hello. Could you look at User_talk:Nageh/mathJax#MathJax_prevents_Greek_letters_from_getting_rendered? I've waited eight months for this, and when it's finally here, I find that MathJax may be the thing preventing me from using it. Thanks. Michael Hardy (talk) 20:19, 5 October 2011 (UTC)[reply]

Try it now. Btw, no need to ping me on the talk page, I am watching the mathJax page and I don't have mail notifications activated. Nageh (talk) 20:33, 5 October 2011 (UTC)[reply]

Works now. Thank you! Michael Hardy (talk) 21:59, 5 October 2011 (UTC)[reply]

Verifiability essay

It is now very obvious that many editors do not understand the point of the "verifiability, not truth" policy. I think your comment at the RfC was very clear and insightful. As you may know we have an essay: Wikipedia:Verifiability, not truth, meant to explain this phrase to newcomers. Anyone can edit an essay, and I just made some edits to try to make it clearer but (as with just about anything at WP) I think it can still stand improvement. Would you read it over and see if you can improve it any? If the policy changes the essay will have to be changed, but in the meantime, it ought to help newcomers understand what we mean, and should explain things as clearly as possible Slrubenstein | Talk 15:09, 7 October 2011 (UTC)[reply]

Thank you for the flattering comment. I'll see what I can do as soon as I find some more time. Nageh (talk) 12:29, 8 October 2011 (UTC)[reply]

I didn't understand the page : LOGARITHMS

Hi Nageh, can you please send a simpler version of the page Logarithm to my talk page? talkWill Gladstone (talk) 08:53, 13 October 2011 (UTC)[reply]

Can you help me

I am writing an article on mathematics. Please will you guide me on how to write mathematical equations and insert photos in it? Or you can give me a link about it. talkWill Gladstone (talk) 08:33, 16 October 2011 (UTC)[reply]

Office Hours

Hey Nageh! I'm just dropping you a message because you've commented on (or expressed an interest in) the Article Feedback Tool in the past. If you don't have any interest in it any more, ignore the rest of this message :).

If you do still have an interest or an opinion, good or bad, we're holding an office hours session tomorrow at 19:00 GMT/UTC in #wikimedia-office to discuss completely changing the system. In attendance will be myself, Howie Fung and Fabrice Florin. All perspectives, opinions and comments are welcome :).

I appreciate that not everyone can make it to that session - it's in work hours for most of North and South America, for example - so if you're interested in having another session at a more America-friendly time of day, leave me a message on my talkpage. I hope to see you there :). Regards, Okeyes (WMF) (talk) 14:27, 26 October 2011 (UTC)[reply]

MathJax integration into stock MediaWiki

Hi Nageh!

I'm looking into improvements to math rendering, of which adapting sane, scalable client-side rendering should give the biggest gain. We'll still need the PNG renderings as a fallback for unsupported browsers, but in the future I expect most readers will be seeing MathJax-rendered equations in their browsers...

Can you give any advice on customizations you've made to MathJax itself or its rendering modes to fit with usage on Wikipedia? Anything I should look out for while heading down this road?

Thanks! --brion (talk) 23:40, 28 November 2011 (UTC)[reply]

Starting a thread on wikitech-l mailing list, will put notes into an RFC page on mediawiki.org in a bit. --brion (talk) 00:46, 29 November 2011 (UTC)[reply]

Nice! For a start, have a look at User:Nageh/mathJax.js and User:Nageh/mathJax/config/TeX-AMS-texvc_HTML.js (the uncompressed parts). More comments tomorrow. Nageh (talk) 01:01, 29 November 2011 (UTC)[reply]

Here is a list of changes I applied to the original MathJax sources:

  1. Custom combined TeX-AMS-texvc-to-HTML/MathML file: User:Nageh/mathJax/config/TeX-AMS-texvc_HTML.js.
  2. Support for texvc commands. See integrated file texvc.js in User:Nageh/mathJax/config/TeX-AMS-texvc_HTML.js.
  3. Heuristic detection of inline vs. display maths. Heuristic used: if the parent HTML element is "DD" and the maths element is its only child, interpret it as "display maths", otherwise as "inline maths". Also, if the maths element is classified as "inline maths" but is the first child of the parent "DD", precede the maths with "\displaystyle": this effectively treats the maths as a display-style formula but without the newline of ordinary display-maths formulas. See the ConvertMath() function in User:Nageh/mathJax/config/TeX-AMS-texvc_HTML.js, and the rough sketch after this list.
  4. Replacement of initial "\scriptstyle" by "\textstyle" and of initial "\scriptscriptstyle" by "\scriptstyle" in "inline maths" elements. This considers a common hack on Wikipedia, which intends to match the TeX font height to that of the surrounding text.
  5. Hacks to support \oiint and \oiiint. Inadequately solved, please consider extra font support for these symbols (they are not part of the fonts distributed by the MathJax folks -- please talk to them for other font symbols.)
  6. Hack to support \color commands. Not needed anymore since upcoming MathJax 2.0 provides an extension with TeX \color support.
  7. Custom rendering settings, among others, to render \text{} content in ordinary HTML font. See User:Nageh/mathJax.js.
  8. Support for user customization.
  9. Support for user-provided macros overriding macros defined in extension. See the Register.StartupHook() call in mathJax.Config of User:Nageh/mathJax.js.
  10. Support for wikEd and ajaxPreview. See mathJax.Init() of User:Nageh/mathJax.js.
  11. Conditional loading of MathJax only when maths elements are present.
  12. Other initial fixes, which are integrated/addressed in upcoming MathJax 2.0 now.
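
As a rough sketch of the heuristic in item 3 (simplified for illustration only; the actual code is the ConvertMath() function in User:Nageh/mathJax/config/TeX-AMS-texvc_HTML.js, which also handles whitespace text nodes and the \displaystyle prefixing):

// Simplified sketch of the inline/display classification described in item 3.
function classifyMath(mathElem) {
  var parent = mathElem.parentNode;
  if (parent && parent.nodeName === 'DD') {
    if (parent.childNodes.length === 1) {
      return 'display';             // indented and the sole child: treat as display maths
    }
    if (parent.firstChild === mathElem) {
      return 'displaystyle-inline'; // first child: prepend \displaystyle, but no newline
    }
  }
  return 'inline';                  // everything else: ordinary inline maths
}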

I think this is all. Hope that helps, and thanks for your upcoming efforts! Nageh (talk) 13:31, 29 November 2011 (UTC)[reply]

What to watch out for:
MathJax does not implement the complex macro vs. primitive processing of TeX. This means that MathJax will always parse arguments in a "greedy" way, i.e., it does not consider that some macros require more than one parameter. Note that this also creates incompatibilities in regular TeX when packages override primitives or macros with other macros, but importantly it means that there will be some statements that render fine with the current MediaWiki renderer but require additional braces to delimit parameters with MathJax. See User_talk:Nageh/mathJax#underline and mathJax#The_appearance_of_certain_fractions for examples. See User_talk:Nageh/mathJax for a list of issues that came up during usage of my extension. Nageh (talk) 13:41, 29 November 2011 (UTC)[reply]
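
As a generic illustration of the defensive style this implies (a sketch only, not the concrete failing cases from the linked threads): delimit every macro argument with explicit braces so the markup does not depend on how a particular renderer groups un-braced tokens, e.g.

% every macro argument delimited with explicit braces
\frac{a}{b}          % rather than \frac ab
\underline{x_i}      % rather than \underline x_i
\sqrt{\frac{1}{2}}   % rather than \sqrt \frac{1}{2}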

Awesome, that's lots of useful stuff! :D I've made an initial stab at integrating the MathJax source into our Math extension and having it auto-launch with some of your customizations & initialization code. I should be able to tweak that further so it replaces the default image forms as well as the text format; and also of course adding a switch in prefs so you can disable/enable it individually. (Probably just a 3-way selector between PNG, source, and MathJax, where the MathJax initially loads images and replaces them for highest compatibility?) --brion (talk) 01:22, 6 December 2011 (UTC)[reply]
One realization could be to have two options for "TeX only" and "PNG rendering", and an extra checkbox to override the maths with MathJax rendering. Not sure how displaying images before MathJax rendering works in practice: it could be that the images are already replaced before the typesetting step, in which case downloading images would be pointless. Also, note that MathJax's MathML output rendering, which I usually use these days on Firefox, is fast enough to not warrant the downloading of images. I'd say, play with it. Nageh (talk) 15:56, 6 December 2011 (UTC)[reply]
Responding to RFCs

Remember that RFCs are part of Dispute Resolution and at times may take place in a heated environment. Please take a look at the relevant RFC page before responding and be sure that you are willing and able to enter that environment and contribute to making the discussion a calm and productive one focussed on the content issue at hand. See also Wikipedia:Requests for comment#Suggestions for responding.

Greetings! You have been randomly selected to receive an invitation to participate in the request for comment on Wikipedia talk:Citing sources. Should you wish to respond to the invitation, your contribution to this discussion will be very much appreciated! However, please note that your input will carry no greater weight than anyone else's: remember that an RFC aims to reach a reasoned consensus position, and is not a vote. In support of that, your contribution should focus on thoughtful evaluation of the issues and available evidence, and provide further relevant evidence if possible.

You have received this notice because your name is on Wikipedia:Feedback request service. If you do not wish to receive these types of notices, please remove your name from that page. RFC bot (talk) 16:17, 4 December 2011 (UTC)[reply]

NON-IMPLEMENTED INTEGRALS \oiint + \oiiint

Hello. I notice you are a prominent editor on the Help:Displaying a formula talk page, so I have repeated an entry here to bring it to attention. It is a proposed workaround for closed double and triple integrals, which are not yet possible with the current LaTeX on Wikipedia. Please read it when you have time and see what you think. It's only a suggestion – I'm not forcefully trying to spread this on Wikipedia.

_____________________________________

I have a workaround to propose for creating integrals requiring \oiint and \oiiint.

  • Use these images (without the thumbnails), without changing their size, in front of the integrand expression.
  • In the actual LaTeX syntax, type {\frac{}{}}_{REGION} at the very start, in which REGION can be replaced by any symbol representing the closed surface or volume of the region in question (see the sketch right after this list).
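
A minimal sketch of what that would look like, assuming the closed surface is labelled S and taking Gauss's law as an arbitrary integrand (the ∯ glyph itself would come from the image placed immediately before the formula):

{\frac{}{}}_{S}\, \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{Q}{\varepsilon_0}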

Here are some random examples where it might be used (in situations where a closed surface or volume is needed):

[Example formulas were shown as images, combining the 32px and 28px closed-integral glyphs with various integrands.]
Just a suggestion.

_____________________________________

--F=q(E+v^B) (talk) 18:45, 12 December 2011 (UTC)[reply]

Replied at the help talk page. Nageh (talk) 20:24, 12 December 2011 (UTC)[reply]

Please comment on Talk:Ariel A. Roth

Greetings! You have been randomly selected to receive an invitation to participate in the request for comment on Talk:Ariel A. Roth. Should you wish to respond to the invitation, your contribution to this discussion will be very much appreciated! If in doubt, please see suggestions for responding. If you do not wish to receive these types of notices, please remove your name from Wikipedia:Feedback request service.RFC bot (talk) 20:16, 28 December 2011 (UTC)[reply]

5G

Hello Nageh, let me start by apologising for any errors I may be making; I am new as a Wikipedia contributor. I put a deletion flag on a Wikipedia article (5G) because I find it very bad, very partisan and strongly speculative. Other users seem to think the same, and I ask myself why you deleted it. Maybe I made an error by putting it into the article itself? Thank you for any help you can provide. 3enix. — Preceding unsigned comment added by 3enix (talkcontribs) 15:39, 6 January 2012 (UTC)[reply]

You did nothing wrong in principle, and I agree with you that the article is highly speculative, probably to the point of exceeding what is generally accepted on Wikipedia, but the talk page of the article showed no consensus on deletion. As such, deletion of this article should go through a discussion, as I indicated in my edit summary. The easiest way to do this is to use Twinkle, which you can enable in "My preferences", and then tag the article with XFD (Anything for deletion) from the TW (Twinkle) tab in the upper right corner. If you cannot figure it out, I can put the article up for deletion on your behalf. Hope that helps, and all the best with your future editing!
PS: I'll post a few links on your talk page that should help you get acquainted with our core editing policies and guidelines. Also note that you should always sign your posts using four tildes (~~~~). Nageh (talk) 16:31, 6 January 2012 (UTC)[reply]

Berger code

How does the Berger code correct errors?

Perhaps Talk:Error detection and correction#Berger code is a better place to discuss this question. I suspect I am misunderstanding something you mentioned years ago.[6] --DavidCary (talk) 03:19, 9 January 2012 (UTC)[reply]

Computer hardware

Please understand the difference between the general term hardware, which – when it comes to electronics and computers – may refer to any electronic circuit, such as single-purpose circuits designed to fulfill one particular job, and the term computer hardware, which is hardware that is part of a computer, a general-purpose (or special- but multiple-purpose) device that can be custom-programmed to fulfill different jobs. Do not simply change all instances of hardware to computer hardware. Thanks. Nageh (talk) 22:04, 19 January 2012 (UTC)

Hi, what you are describing sounds to me like a special purpose computer. Special purpose computer now redirects to microcontroller. So, technically some might still consider it computer hardware, but not a general purpose computer (which redirects to computer). I want to avoid links to hardware, which per WP:OVERLINK is so general as not to be a useful link, i.e. "What aisle of the hardware store has hardware supporting the Advanced Encryption Standard?" In other words, while one option would be to not link hardware at all, a more specific link would be better, and something even more specific than computer hardware would be better still. What would work best for these types of articles: microcontroller, digital circuit or something else? Or we could say microcontrollers or general purpose computer hardware to cover them both. Thanks. Wbm1058 (talk) 00:05, 20 January 2012 (UTC)[reply]
No. A microcontroller is a general-purpose computer; that redirect is incorrect. An example of a special-purpose computer is a GPU. Compare that to a GPGPU, which is a generally programmable GPU. I understand that linking to hardware is not quite helpful; probably, linking to electronic circuit is the best solution when no computer component is implied. Nageh (talk) 00:39, 20 January 2012 (UTC)[reply]
Thanks, one thing I like about editing Wikipedia is that I learn stuff while I'm doing it. The distinction between special-purpose and general-purpose graphics hardware is very helpful. I just observed that while special purpose computer redirects to microcontroller, special-purpose computer redirects to embedded system. It's not the first time I've run across redirects with cosmetic differences that go to different articles. For consistency, these should both go to the same place. Neither microcontroller nor embedded system defines special-purpose computer. It seems that embedded system is better; at least the last paragraph of the lede discusses the issue:
In general, "embedded system" is not a strictly definable term, as most systems have some element of extensibility or programmability. For example, handheld computers share some elements with embedded systems such as the operating systems and microprocessors that power them, but they allow different applications to be loaded and peripherals to be connected. Moreover, even systems that do not expose programmability as a primary feature generally need to support software updates. On a continuum from "general purpose" to "embedded", large application systems will have subcomponents at most points even if the system as a whole is "designed to perform one or a few dedicated functions", and is thus appropriate to call "embedded".
Should special-purpose computer redirect to embedded system (and get defined in that article), or should a new article be created, with special-purpose computer and general-purpose computer sharing the same article? General-purpose computer just redirects to computer, where the term is mentioned multiple times, though perhaps not fully defined and contrasted with special-purpose. Maybe if there's not enough content to justify a full article, a new section in computer, General- and special-purpose computers? Wbm1058 (talk) 21:55, 20 January 2012 (UTC)[reply]
Hm. On second thought, microcontrollers are border cases: while they are generally programmable, they are used for special-purpose applications. Nonetheless, the terms are probably best redirected to and explained in the computer article. While they could possibly be introduced somewhere in the discussion in the History section, as early designs such as the ABC were clearly special-purpose, the issue is probably best brought up at the article's talk page. Nageh (talk) 23:20, 20 January 2012 (UTC)[reply]
Good idea. Discussion now at Talk:Computer#Definition of computer. Feel free to add comments. Wbm1058 (talk) 02:56, 21 January 2012 (UTC)[reply]
I will add my thoughts in an ongoing discussion. Note that I have moved your section down to the bottom of the page. It is generally a good idea to not intersperse new comments within older sections as people will miss them. Nageh (talk) 13:40, 21 January 2012 (UTC)[reply]