Forum Replies Created

Thread Starter Arturo emilio

    (@anduriell)

    I get these errors when trying to enable Jetpack:

    Notice: Undefined index: message in /arturoemilio.es/wp-content/plugins/jetpack/_inc/lib/debugger/class-jetpack-debugger.php on line 76
    Notice: Undefined index: resolution in /arturoemilio.es/wp-content/plugins/jetpack/_inc/lib/debugger/class-jetpack-debugger.php on line 79

    In Health Check I get this error:

    Communicating with the WordPress servers is used to check for new versions, and to both install and update WordPress core, themes or plugins.
    Error Your site is unable to reach WordPress.org at 198.143.164.251, and returned the error: cURL error 77:

    But if I try to connect to those services using openssl or a cURL PHP script, everything works fine.
    I just don’t get why I can’t connect to WordPress.org anymore.
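    For reference, this is roughly the standalone test I mean (just a sketch: the endpoint is the one WordPress itself checks, but the CA bundle path is an assumption for a Debian-like system, so adjust it):

    $ch = curl_init('https://api.wordpress.org/core/version-check/1.7/');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // cURL error 77 usually means the CA certificate bundle could not be
    // read, so point cURL at an explicit bundle to rule that out.
    curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');
    $body = curl_exec($ch);
    if ($body === false) {
        echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch) . PHP_EOL;
    } else {
        echo 'Reached WordPress.org fine.' . PHP_EOL;
    }
    curl_close($ch);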

    There are no ads. If you can’t see them (without an AdBlock), then there aren’t any.
    The issue is on your friends’ computers.

    Thread Starter Arturo emilio

    (@anduriell)

    Fantastic, thank you.

    Thread Starter Arturo emilio

    (@anduriell)

    Never mind, it was a fault on my part with some functions I had inside the theme.

    Thread Starter Arturo emilio

    (@anduriell)

    Permalinks worked fine; just the sitemap did not work. As I said, it doesn’t matter because the static file is good enough for me.

    Did you try your sitemap with nginx and dynamic generation?
    Is it working fine?

    Thread Starter Arturo emilio

    (@anduriell)

    Yes, I do have that line before everything; otherwise the pretty URLs don’t work.

    For example, http://www.arturoemilio.es/ideas-y-proyectos/ would give a 404 page.
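    For context, the line I mean is the standard WordPress permalink rule for nginx (my block looks roughly like this; yours may differ slightly):

    location / {
        # Serve the file or directory if it exists; otherwise hand the
        # request to WordPress so pretty permalinks resolve.
        try_files $uri $uri/ /index.php?$args;
    }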

    Thread Starter Arturo emilio

    (@anduriell)

    Thank you for the answer. I didn’t see the dynamic sitemap option; I thought it was always dynamic. With the static option my problem is solved.

    Strange, it looks like it does not redirect properly with WordPress 4.0 and nginx. I had to use these rules in my virtual_server.conf:

    location ~ sitemap\.xml {
        # Serve the static file if it exists; otherwise let the plugin
        # generate the sitemap dynamically through index.php.
        try_files $uri $uri/ /index.php?aiosp_sitemap_path=root;
    }
    location ~ sitemap\.gz {
        try_files $uri $uri/ /index.php?gzipped=1&aiosp_sitemap_path=root.gz;
    }

    Although that wouldn’t work with sitemap indexes, at least it would show the sitemap. If I activate the dynamic option and don’t set up those rules, it shows a 404 page.
    The rest of the site works properly, so I think it is not about the nginx configuration files.
    I think this issue is being resolved. Thank you.

    Thread Starter Arturo emilio

    (@anduriell)

    Yep, the nginx configuration for the sitemap is nonexistent. I had to fiddle inside the code to figure out how to configure the virtual server to handle requests like http://www.arturoemilio.es/sitemap.xml and show the actual sitemap.
    Otherwise it shows a 404 (not found) page.

    It would be nice if the sitemap module also created an nginx.conf with the appropriate rules, or showed them on the sitemap page so they could be copied into the virtual server file. Right now it only does that for .htaccess files, which do not work with nginx.

    Thread Starter Arturo emilio

    (@anduriell)

    I’m sorry, I’ve been offline these days, but now I’m back.
    Yes, there is a problem with the encoding, with DOM and libxml complaining, but I found this most useful:
    loading the DOM with the encoding declared up front makes things easier:

    $dom = new DOMDocument('1.0', 'UTF-8');
    $dom->preserveWhiteSpace = false;
    $dom->formatOutput       = true;
    // Prepending the meta tag tells libxml the buffer is UTF-8;
    // the @ silences libxml warnings about imperfect real-world HTML.
    @$dom->loadHTML('<meta http-equiv="content-type" content="text/html; charset=utf-8">' . $buffer);

    In the end, $buffer = $dom->saveHTML(); is enough.

    This is important because if it is not done the page will get messed up. For example, my blog is in Spanish, so I’m using UTF-8 encoding in the pages. The translation plugin I’m developing has to handle pages not only in Latin encodings but also in UTF-8, and it should even accept non-Latin characters. On my blog you can see how it works just by choosing another language in the combo box; it is still not perfect and has some issues with non-Latin scripts like Chinese or Thai, but it mostly works fine. The translation is done using DOM as well.
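    In case it helps, this is roughly how I wire it into the page output (a sketch: the filter name is just an example, the rest is plain PHP output buffering):

    // Hypothetical output filter: capture the whole page, round-trip it
    // through DOMDocument, and return the re-serialized HTML.
    function my_dom_output_filter($buffer) {
        $dom = new DOMDocument('1.0', 'UTF-8');
        $dom->preserveWhiteSpace = false;
        $dom->formatOutput       = true;
        @$dom->loadHTML('<meta http-equiv="content-type" content="text/html; charset=utf-8">' . $buffer);

        // ... translate / rewrite nodes here ...

        return $dom->saveHTML();
    }

    ob_start('my_dom_output_filter');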

    To look for the attributes you could go straight for them instead of walking recursively attribute by attribute:

    foreach ($scripts as $script) {
        if ($script->hasAttribute('src')) {
            // External script: keep its URL.
            $dom_javascript['external'][] = $script->getAttribute('src');
        } else {
            // Inline script: keep the source code itself.
            $dom_javascript['inline'][] = $script->nodeValue;
        }
    }

    I don’t know if at this point you should check whether the script is from the same domain or not.
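    If you do want that check, something like this could work (a sketch: home_url() is the WordPress helper, the rest is plain PHP):

    // Treat relative URLs and URLs on our own host as same-domain.
    function is_same_domain($src) {
        $src_host  = parse_url($src, PHP_URL_HOST);
        $site_host = parse_url(home_url(), PHP_URL_HOST);
        return $src_host === null || $src_host === $site_host;
    }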

    Thread Starter Arturo emilio

    (@anduriell)

    Hi futta!

    • About md5(): I’ve encountered this behavior with a plugin that builds CSS sprites from the images in your WordPress blog. Although the images were the same and the process was the same every time, from time to time the hash changed, so it had to build a new sprite. I don’t know why, but sha1() seems to solve that problem. Also, this is a CMS that produces the same output every time if the variables haven’t changed.
      Why should the same static content suddenly give a new hash if the content of the files remains the same?
    • About using the full URL for that file instead of the content: those files are not supposed to change, they are static. If the developer changes those files, they should be aware that the cache needs refreshing.
    • And about XPath: in this case we are looking for script/CSS tags, which shouldn’t be broken or they wouldn’t work anyway. It is much faster than regex and puts far less load on the CPU (see the sketch below).

    But this is only my point of view. I could be wrong anyway.
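    For what it’s worth, this is the kind of XPath lookup I mean (a sketch, assuming $dom already holds the parsed page):

    // One query per tag type instead of regex-scanning the whole buffer.
    $xpath   = new DOMXPath($dom);
    $scripts = $xpath->query('//script');
    $styles  = $xpath->query('//link[@rel="stylesheet"]');

    foreach ($styles as $style) {
        echo $style->getAttribute('href') . PHP_EOL;
    }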

    Thread Starter Arturo emilio

    (@anduriell)

    Hi! Actually you are right! I didn’t think it was that because, as you can see in the second link, although it used all the small flag PNGs it did not take that long. After doing some adjustments to the plugin (Traductor is a plugin I’m developing that translates the WordPress blog using Google Translate the same way Transposh does, but server-side, so it avoids all the JavaScript overhead that messes up any theme and cache plugin, and it does not need a Google Apps account, so it is free),
    the page now loads at the correct speed, 1 to 2.5 seconds:
    http://gtmetrix.com/reports/www.arturoemilio.es/Aa0sepZ0

    However, the plugin seems to have some trouble when there are a lot of small images. The speed problem was constant while the plugin was active. If you want to check it out, I’m using this JavaScript script:
    https://github.com/marghoobsuleman/ms-Dropdown
    To reproduce the behavior, put different flags in the data-image attribute of the img.

    Regards, and thank you for pointing that out!

    Forum: Fixing WordPress
    In reply to: Weird permalinks

    In your HTML you can see this tag:
    <!-- WP Super Cache is installed but broken. The constant WPCACHEHOME must be set in the file wp-config.php and point at the WP Super Cache plugin directory. -->

    Please make sure everything is working properly; in this case maybe Super Cache is messing with your cached pages, serving the wrong ones.
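    If that is the case, the missing define in wp-config.php would look like this (the path is just an example, adjust it to your install):

    // Point WP Super Cache at its own plugin directory.
    define('WPCACHEHOME', '/var/www/example.com/wp-content/plugins/wp-super-cache/');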

    Regards

    Thread Starter Arturo emilio

    (@anduriell)

    OK, thanks! The Spanish translation is a little misleading then. Instead of being:
    “Un camino por línea, debe ser dentro de /home2/arturo/public_html/. Utilice caminos absolutos, no caminos relativos.” (“One path per line; it must be inside /home2/arturo/public_html/. Use absolute paths, not relative paths.”)
    something like this would be better:
    “Un camino por línea, debe ser dentro de /home2/arturo/public_html/. Utilice la ruta absoluta incluyendo /home2/arturo/public_html/ en la misma.” (“One path per line; it must be inside /home2/arturo/public_html/. Use the absolute path, including /home2/arturo/public_html/ in it.”)

    Your text asks for the absolute path but does not say whether it starts from the web directory (/home2/arturo/public_html/) or whether I should give it from the home directory.
    Or, instead of that, maybe a message in the debug console about the directory being wrong or something (nothing came up at the time).
    Thanks again.

    +1 here

    Thread Starter Arturo emilio

    (@anduriell)

    That sounds good enough!

    Actually, it is really difficult to avoid being scraped by a crawler, but it can be achieved.
    There are some pages that are really well protected against crawling, but I can’t say how they do it or how to get around that protection; I also didn’t have time to look into it.

    Best regards!
