This was mainly a bug-fix release.
The major problem was that new configurations would not have the forced builtins set correctly internally. The effect was that no builtins would appear in code-completion or when doing code-analysis.
Aside from that, a patch provided by Carl Robinson allows users to configure the PyLint severities.
Friday, June 29, 2007
Wednesday, June 27, 2007
Pydev and JDT (SDK not required anymore)
I forgot to mention it... Pydev 1.3.5 does not require JDT to work anymore (it's now an optional dependency, only needed for Jython development), so users interested only in Python can download the Eclipse Platform Runtime Binary instead of the whole SDK.
There's only one minor problem with that: the Platform Runtime does not include the Error Log view (which is usually useful for bug reports when something goes wrong) -- so, if you're using only the runtime and not the whole SDK, you need to look through the .metadata/.log file for those errors.
Another option is getting only the plugin that adds the Error Log view: org.eclipse.pde.runtime_XXX.jar -- it's pretty small (147kb here), but it has to be obtained from the SDK.
Friday, June 22, 2007
Noteworthy things about Pydev 1.3.5
Ok, the new version was released yesterday, and there are a couple of things worth mentioning:
Facelifts
- Docstrings and the pop-up window: the docs are now correctly wrapped, the whitespace columns to the left are removed and the size is kept across code-completion requests.
- Outline: the comments handling is much better (it respects the level they should appear in, finds comments ending with '---' and sorts them by their position, even when alphabetic ordering is chosen)
wxPython debugging also works again... it was not working correctly in the last release because pydev was keeping the latest frame from a thread alive a bit longer than it should (and wxPython didn't like it). That was needed because pydev currently runs with several untraced frames, but to make them traceable again, pydev needs some way to access them (so, it kept the topmost frame available in the thread)... that access is still needed, so 2 workarounds are now in place to avoid that strong reference to the topmost frame (a small sketch follows the list below):
- python 2.5 added support for getting the running frames of all threads through sys._current_frames(), so this was a piece of cake (after I discovered it).
- Earlier versions need a different approach: a list of weak-refs to the PyDBFrame is kept (that's the class that wraps the tracing facility for a given frame), and that class holds the actual frame as a strong ref (because you can't create weak-refs to frames directly)... it seems like a hack to me, but the only other option would require a compiled library to get the frames the way python 2.5 does, and I didn't want the burden of shipping compiled code for different platforms (which also wouldn't cover jython).
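Just to make those two approaches a bit more concrete, here's a minimal sketch of the idea -- the names (FrameHolder, register, etc.) are made up for the example and are not the actual pydev classes:

import sys
import weakref

def get_thread_frames():
    # python 2.5 and later: the interpreter hands us the topmost frame of
    # every running thread, so nothing extra needs to be kept alive for that.
    return sys._current_frames().values()

class FrameHolder(object):
    # pre-2.5 idea: a frame can't be the target of a weak reference, but a
    # small wrapper object can, and the wrapper keeps the frame through a
    # normal (strong) attribute.
    def __init__(self, frame):
        self.frame = frame

holders = []  # weak references to the holders, so frames aren't kept alive here

def register(frame):
    holder = FrameHolder(frame)
    holders.append(weakref.ref(holder))
    return holder  # whoever traces the frame keeps the holder (and the frame) alive

def get_registered_frames():
    return [h().frame for h in holders if h() is not None]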
Saturday, June 16, 2007
Why can't the pydev debugger work with turbogears?
Ok, there's a problem that can't really be overcome in the pydev debugger when using turbogears... (just doing "import turbogears" would already break it).
Actually, no OPTIMIZED debugger would be able to work with that. I'm saying optimized because the implementation seems to take into account only naive debuggers that trace all calls in all frames (pydev only traces frames with breakpoints).
The problem is: there's a module turbogears uses (in my tests: DecoratorTools-1.4-py2.5.egg) which has a decorator named decorate_assignment. This decorator uses the tracing facility that python provides for debuggers and removes the current debugger trace function. It does try to restore the previous trace function if the frame was already being traced, but that will hardly ever be the case with an optimized debugger.
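To give an idea of what that looks like, here's a hedged illustration of the pattern (not the actual DecoratorTools source -- the name decorate_assignment_like is made up):

import sys

def decorate_assignment_like(callback, depth=2, frame=None):
    frame = frame or sys._getframe(depth)
    oldtrace = [frame.f_trace]  # with an optimized debugger this is almost always None

    def tracer(frm, event, arg):
        # ... the decoration bookkeeping would happen here ...
        # then the "old" tracer is restored -- but restoring None effectively
        # switches tracing off for the frame, which is what kills the debugger
        if oldtrace[0]:
            return oldtrace[0](frm, event, arg)

    frame.f_trace = tracer
    sys.settrace(tracer)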
So, there's no way to actually fix that from pydev, but there are some options to make it work:
1. Using the pydev extensions remote debugger (but if that decorator is called after the remote debugger is set, the debugger would stop working again, so this option is only useful if the decorator is not used later).
2. Removing that decorator from the places that use it in turbogears (the implications for that would have to be checked).
3. Hard-coding it to return the pydev trace function. To do that, the file DecoratorTools-1.4-py2.5.egg\peak\util\decorators.py must be changed so that the function "def decorate_assignment(callback, depth=2, frame=None):" does not use the line:
"oldtrace = [frame.f_trace]"
and uses the code below instead:
oldtrace = None
try:
    import pydevd
    debugger = pydevd.GetGlobalDebugger()
    if debugger is not None:
        oldtrace = [debugger.trace_dispatch]
except ImportError:
    pass
if oldtrace is None:
    oldtrace = [frame.f_trace]
The 3rd option is probably the easiest in the short run for those wanting to debug turbogears in pydev, but I think the 2nd is the one that should actually be used (as a general rule, I believe only debuggers should play with the tracing facility, because it tends to be way too intrusive, and it's probably the most un-optimized way of doing something, as you end up tracing everything that happens, which can lead to a large overhead).
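As a side note, it's easy to get a feeling for that overhead with a quick illustrative micro-benchmark (numbers will obviously vary from machine to machine):

import sys
import time

def work():
    total = 0
    for i in range(200000):
        total = total + i
    return total

def tracer(frame, event, arg):
    return tracer  # keep tracing line events in every new frame

start = time.time()
work()
print("untraced: %.3fs" % (time.time() - start))

sys.settrace(tracer)
start = time.time()
work()
print("traced:   %.3fs" % (time.time() - start))
sys.settrace(None)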
Thursday, June 14, 2007
Working offline in the pydev source
Ok, after quite some time being really annoyed at not being able to commit when I want to the cvs at sourceforge, and sync operations sometimes taking almost forever (yep, I double-check everything before committing), I've decided to take a look at alternative options.
Basically, decentralized scm systems seem the way to go, so, after taking a (rather quick) look at some of the alternatives (mercurial, git and bzr), mercurial seems to be the one that fits best with how I want to work.
It seems to coexist nicely with the code base in place -- that's a must because I'll have to keep committing things to the cvs at sourceforge. And it's pretty unobtrusive: it doesn't keep zillions of files around each folder as svn and cvs do -- everything is kept under a single folder, out of the way of the actual projects -- and having a single .hgignore file instead of one per folder is also pretty nice.
All my projects are on an X: drive (when on windows), so, basically, I've simply created a mercurial repository at X: and imported all the pydev stuff into it.
I've just played with it for about 1-2 hours, and I already feel I cannot live without it anymore ;-)
I'll probably change my modus operandi to do everything on my machine, with patches and diffs (BTW: I'm using KDiff3, which integrates nicely with Mercurial for that), and just commit everything to sourceforge at once, without having to double-check all those things...
But the most important thing is: I feel really relieved not to have to make those syncs that took forever just to check that everything is correct when committing to sourceforge (and working offline is a big win too).
The major drawback is that the integration with Eclipse is still in its early stages (in fact, I'll probably be using the command line and diffs with KDiff3 until it matures -- which I hope will not take long) -- but that's still a minor thing when compared with the advantages.