-
Rock Mechanics - Drilling and Blasting at Smallwood Mine
By A. Bauer, P. Calder, N. H. Carr, G. R. Harris
Since both rotary and jet piercing drills are used by the Iron Ore Co. at Smallwood, it is often desirable in planning to know in which regions of the orebody or new orebodies a particular drill will be the most economic. This makes it necessary to establish a correlation between drillability and pierceability and some physical rock properties. For rotary drills a good correlation was found between penetration rate and grinding factor index. The jet piercers were found to have a reciprocal relationship in the sense that the best rotary ground was the worst jet ground and vice versa. It is also indicated how an economic comparison could be made using these penetration rate versus grinding factor index curves, the hole size distribution curves for single pass and chambered holes, and the mine distribution curve for grinding factor index. A discussion is presented on the fuel-oxygen ratios to be used in jet piercing and on the site gas sampling and analysis which has been used to set up the drills. The fuel has been cut back so that stoichiometric conditions exist, carbon monoxide is drastically reduced, and pop-up or exploding holes are eliminated. No decrease in penetration rate has been observed, contrary to the published results of previous workers. The blasting procedure and results at Smallwood are discussed, and the operation of the Iron Ore Co.'s slurry pump-mix truck is also described briefly.

Smallwood mine is part of the Iron Ore Co.'s Carol Lake operation and is situated in Labrador, 240 miles north of Sept-Iles, Quebec. Last year 15 million tons of crude ore were crushed to yield 6.3 million tons of concentrate and pellets. This year the figures will be 17 million tons of crude and 7½ million tons of concentrate and pellets, which is the full plant capacity. Carol Lake ores consist primarily of specularite and magnetite mixed with quartz.
For convenience the ore has been split into the following classifications depending on the percentage of magnetics in the sample, shown in brackets: specularite (0 to 10%), specularite-magnetite (10 to 20%), magnetite-specularite (20 to 30%), magnetite (>30%). The order of classification also represents the order of increasing grinding difficulty, the specularite generally being the easiest and the magnetite the hardest. The orebody also contains a small percentage of waste materials consisting of limonite carbonate, quartz carbonate and quartz magnetite. The first two materials are among the softest in the mine, generally softer than the specularite, and the quartz magnetite is amongst the hardest. The bulk of the material in the mine is of the specularite-magnetite and magnetite-specularite classifications. As a result of test drilling at Smallwood in 1960 with rotary, jet and percussion drills, the Iron Ore Co. purchased four JPM-4 jet piercers for the bulk of production drilling and set up an oxygen plant to supply 20 tons of oxygen per day. This oxygen is sufficient for two machines operating full time and one part time. In addition, there are two 50-R, one 60-R and one 40-R machines in use. The benches are 45 ft high and 50 ft holes are generally drilled.

JET DRILLING

At the onset of jet drilling in the late fall of 1962, two major problems were encountered: 1) freezing due to winter operations; experience and the use of heat at more places, such as the rotary head, has eliminated this; and 2) exploding or "popping" drilled holes; this happened frequently (several holes "popping" each day) and was the cause of two lost-time accidents. In one instance a hole was being measured with a tape which fell down the hole, causing it to "pop." Safety glasses, though pulverized, saved the wearer's eyesight. Various methods were then employed to detonate the holes before measuring or loading (dropping lighted rags or fusees down, or sparking across a spark gap).
These methods were time consuming and far from completely successful. Consideration was given to the fuel-oxygen ratio on the machines and what this would produce in the way of product gases. A fuel-oxygen weight ratio of 0.35, which was quite oxygen-negative, was being used. Theoretically, appreciable carbon monoxide would be produced at this fuel-oxygen ratio. On the close-down procedure of the jet, which calls for low oxygen after flame-out, oxygen would be left in the hole along with this carbon monoxide. This is an explosive mixture. The fuel-oxygen ratio was cut back to stoichiometric
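As a rough check on the ratios quoted above, the stoichiometric fuel-to-oxygen weight ratio can be computed for an assumed fuel composition. The sketch below approximates the fuel as a (CH2)n hydrocarbon such as kerosene; the actual fuel used at Smallwood is not specified in the text, so the exact number is illustrative only.

```python
# Sketch: stoichiometric fuel/oxygen weight ratio for a jet-piercer burner.
# The fuel is approximated as (CH2)n (e.g. kerosene); this composition is
# an assumption, not data from the paper.
M_C, M_H, M_O = 12.011, 1.008, 15.999  # atomic weights, g/mol

def stoich_fuel_oxygen_ratio(n_C=1, n_H=2):
    """Fuel/O2 weight ratio for complete combustion:
    C_nH_m + (n + m/4) O2 -> n CO2 + (m/2) H2O."""
    fuel_mass = n_C * M_C + n_H * M_H
    o2_moles = n_C + n_H / 4.0
    o2_mass = o2_moles * 2 * M_O
    return fuel_mass / o2_mass

r = stoich_fuel_oxygen_ratio()
print(f"stoichiometric fuel/O2 weight ratio: {r:.3f}")
print("0.35 is fuel-rich (oxygen-negative)" if 0.35 > r else "0.35 is lean")
```

For a CH2-type fuel the ratio works out to roughly 0.29, consistent with the text's observation that the 0.35 ratio then in use was oxygen-negative and could leave carbon monoxide in the hole.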
Jan 1, 1967
-
Institute of Metals Division - Effect of Strain on Diffusion in Metals
By J. Philibert, A. G. Guy
Diffusion in the presence of deformation was studied by the method of vacuum dezincification of copper-rich and silver-rich solid solutions containing 7 to 30 pct Zn. The specimens were designed to permit the study of diffusion in separate portions of a given specimen characterized by strain rates ranging from essentially zero to approximately 10 sec⁻¹. No effect of deformation on diffusion was observed. BEGINNING with the work of Buffington and Cohen,1 interest in the question of the effect of stress or strain on diffusion has largely been concentrated on the enhancement of diffusion in specimens subjected to continuous plastic deformation. The present research is a contribution to this limited area. However, as a preliminary to focusing attention on this special topic, it will be desirable to make a broad survey of the larger question, especially since there has been considerable foreign work in areas outside those of current interest in the United States. Since most of the topics referred to in the following section are both complex and imperfectly understood at present, it has been expedient in most instances to offer only a guide to the general nature of the work rather than a critical evaluation.

PREVIOUS WORK

The effect of elastic stress on diffusion has received considerable attention, especially with regard to the thermodynamic driving force for diffusion. The thermodynamic treatments have been based on the work of Gibbs, Voigt, Planck, and Leontovich. Konobeevskii and Selisskii6 made a first attempt at treating the problem in 1933, and Gorskii7 a few years later gave a solution applicable to single crystals as well as to polycrystalline specimens. In 1943 Konobeevskii8 published treatments that have been the basis of much Russian work up to the present. For example, Aleksandrov and Lyubov used his work in explaining the velocity of lateral growth of pearlite.
Early work in the United States was that of Mooradian and Norton, which showed that lattice distortion tends to be relieved before it can significantly affect the diffusion process. Druyvesteyn and Berghout11 observed a slight effect of elastic strain on self-diffusion in copper, while de Kazinczy12 found that both elastic and plastic deformation increased the rate of diffusion of hydrogen in steel. On the other hand, Grimes58 observed no effect of either elastic or plastic straining on the diffusion of hydrogen in nickel. High-frequency alternating stresses have been reported by various investigators13-15 to increase the rate of diffusion. A special form of elastic stressing is the imposition of hydrostatic pressure, a condition that is amenable to conventional thermodynamic analysis. Most of the experimental results in this area are consistent in showing a slight decrease in diffusion rates at high pressures.16-18 Although Geguzin reported a pronounced effect of relatively small pressures, Barnes and Mazey20 failed to corroborate this finding, while Guy and Spinelli21 advanced an explanation of the phenomenon observed by Geguzin. It has been recognized that the thermodynamic treatment of diffusion phenomena in an arbitrarily stressed body is complicated by the fact that the desired state of quasi-equilibrium of the shear stresses cannot be maintained during a general diffusion process. However, attempts have been made by Meixner22-24 and Fastov to treat certain restricted cases, such as relaxation. Fastov27 has also incorporated the general stress tensor into the thermodynamics of irreversible processes. The lattice strain that accompanies the formation of a solid solution has been the subject of much study,28-30 and indirectly it has entered into many recent theories of diffusion.
However, some Russian investigators31,32 have taken other views of this matter and have predicted large effects on diffusion rates because of concentration stresses. In completing this brief resume of previous work involving elastic strains, and before proceeding to a consideration of the effect of continuous plastic deformation, it should be pointed out that deformation of various additional types may also influence diffusion. The effect of cold-working on subsequent diffusion has been studied directly by Andreeva and by Schumann and Erdmann-Jesnitzer, while indirect evidence has been obtained by Miller and Guarnieri and by Vitman.38 Thermal stresses may also influence diffusion, contributions to this subject having been made by Fastov37 and by Aleksandrov and Lyubov. The work of Johnson and Martin, Dienes and Damask, and Damask considered the question of radiation-enhanced diffusion. In considering previous work on the subject of plastic deformation and diffusion, attention will be directed to those studies concerned primarily with diffusion rather than with its relation to creep, e.g., the work of Dorn, or to the acceleration of diffusion-controlled reactions. Observations of the effect of
Jan 1, 1962
-
Part XI – November 1969 - Papers - High-Temperature Creep of Some Dilute Copper-Silicon Alloys
By C. R. Barrett, N. N. Singh Deo
The high-temperature steady-state creep behavior of a series of dilute copper-silicon alloys was studied to determine the effect of stacking fault energy on the creep rate. The steady-state creep rate, ε̇s, when taken at equivalent diffusivities, decreases with decreasing stacking fault energy. The stress and temperature dependencies of ε̇s suggest that creep is a diffusion-controlled dislocation climb process. Electron microscopy studies of the creep substructure revealed: 1) the subgrain size is not a function of the stacking fault energy in these alloys, 2) the dislocation density not attributed to the subgrain walls seems to be higher during primary creep and decreases to a lower steady value during steady-state creep, and 3) the dislocation density during steady-state creep decreases with decreasing stacking fault energy. In the past few years numerous investigators have studied the influence of stacking fault energy on high-temperature creep strength. Most of these investigators have confined their attention to studying the relationship between the steady-state creep rate, ε̇s, and the stacking fault energy, γ, when samples are tested under conditions of comparable stress and temperature. For the case of fcc metals, it was initially shown by Barrett and Sherby1 and since confirmed by many others2-4 that ε̇s decreases with decreasing γ, often following an empirical relation of the form ε̇s ∝ γ^m, where m is a constant about equal to 3. The application of theory to explain this observation has not been entirely successful. One of the main difficulties has been the almost complete lack of structural information (dislocation density, subgrain size, and so forth) for samples with different stacking fault energies tested under high-temperature creep conditions. Weertman5 has attempted to explain the stacking fault energy dependence of ε̇s on the basis of a dislocation climb mechanism.
Assuming that both the rate of dislocation core diffusion and the ease of athermal jog formation decrease as γ decreases, Weertman has argued that the rate of dislocation climb, and hence the creep rate, should also decrease as γ decreases. One questionable aspect of Weertman's analysis is the assumption that core diffusion down extended dislocations is slower than core diffusion down unextended dislocations. The only experimental work done in this area, by Birnbaum et al.6 on nickel and Ni-60 Co, has shown the core diffusivity to increase with decreasing γ. Theories of steady-state creep based on the diffusive motion of jogged screw dislocations often seem unable to predict even the qualitative nature of the ε̇s-γ relationship. Assuming that Weertman is correct in his assumption that the dislocation jog density decreases with decreasing γ, the jogged screw theories predict an increasing dislocation velocity with lower γ. It is usually assumed that the increase in dislocation velocity implies a corresponding increase in creep rate. However, two other factors must be considered before such a statement can be made. That is, we must know how both the mobile dislocation density and the effective stress (the difference between applied stress and internal stress) vary with γ. Significant changes in either one of these factors could outweigh any change in dislocation velocity accompanying a change in γ. Moreover, with the slower rates of recovery expected in low stacking fault energy materials, it seems reasonable to expect both the mobile dislocation density and the effective stress to depend on γ. Sherby and Burke7 have suggested that stacking fault energy influences the creep rate in an indirect way. These authors cite evidence that the steady-state subgrain size generated during high-temperature creep is a function of γ, decreasing with decreasing γ.
Assuming the creep rate to be proportional to the area swept out by each expanding dislocation loop, and that subgrain boundaries are good barriers to dislocations, the creep rate should be proportional to subgrain area, hence increasing as γ increases. A critical evaluation of any of the above theories requires more quantitative information concerning the dislocation substructure generated during high-temperature creep. Accordingly, this investigation was undertaken with the aim of studying the influence of stacking fault energy on the steady-state creep characteristics of a series of dilute copper-silicon alloys. Special emphasis was placed on studying the strain dependence of both the dislocation configuration and density.

MATERIALS AND PROCEDURE

Dilute copper-silicon alloys of the compositions shown in Table I were tested in tension at constant stress. The relative stacking fault energy of these alloys has been determined and is shown in Table II. An Andrade-Chalmers lever arm was used to maintain constant stress and testing was carried out in a water
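The empirical relation quoted in this entry, steady-state creep rate proportional to stacking fault energy raised to a power m of about 3, lends itself to a one-line scaling calculation. The sketch below is illustrative only; the stacking-fault-energy values used are arbitrary and are not data from the paper.

```python
# Sketch of the empirical relation from the abstract: at equivalent
# diffusivities, creep_rate ∝ gamma**m with m ≈ 3, where gamma is the
# stacking fault energy. Inputs are illustrative, not measured values.
def relative_creep_rate(gamma, gamma_ref, m=3.0):
    """Steady-state creep rate relative to a reference stacking fault energy."""
    return (gamma / gamma_ref) ** m

# Halving the stacking fault energy cuts the predicted creep rate ~8x:
print(relative_creep_rate(0.5, 1.0))  # 0.125
```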
Jan 1, 1970
-
Reservoir Engineering–General - Simultaneous Flow of Gas and Liquid as Encountered in Well Tubing
By N. C. J. Ros
The paper deals with pressure gradients occurring in flowing and gas-lift wells, a knowledge of which can be applied to the determination of optimum flow-string dimensions and to the design of gas-lift installations. The study is based on a pressure-balance equation for the pressure gradient. It appears that a pressure-gradient correlation of general validity must essentially consist of two parts: one part being a correlation for liquid hold-up and the other being one for wall friction. Dimensional analysis indicates that both liquid hold-up and wall friction are related to nine dimensionless groups. It is shown that in the field of interest only four groups are really important. On the basis of these four groups a restricted experimental program could be selected that nevertheless covered practically all conditions encountered in oil wells. This experimental program has been carried out in a laboratory installation. Three essentially different flow regimes were found. The pressure gradients in these regions are presented in the form of a set of correlations. Comparison of these correlations with a few available oilfield data showed excellent agreement.

INTRODUCTION

Prediction of the pressure drop in the flow string of a well is a widely known problem in oilfield practice. Accurate data on the pressure gradient of a simultaneous flow of gas and liquid in a vertical pipe are especially useful for the determination of optimum flow-string dimensions. It is well known that with moderate gas and liquid flows such a vertical string acts as a "negative restriction". The pressure drop decreases (1) when the throughput through a given pipe increases, and (2) when at a given throughput the cross-sectional area is decreased. The reason is that, with increasing velocities, the flow becomes more agitated so that the gas slips relatively more slowly through the liquid. With the resulting increase in gas content in the string, the static head decreases.
When the area becomes very small, however, the high velocities entail great wall friction, which causes an increase in pressure drop. For a given flow, therefore, minimal pressure drop is obtained by using a certain cross section. This means that, in principle, each well can be provided with an optimum flow string for minimum pressure drop and, hence, maximum possible production rate. The procedure for the selection of the optimum string has been discussed by Gilbert.1 A necessary tool in the procedure, however, is accurate knowledge of the pressure gradient to be expected for various values of the governing variables. Another application of pressure-gradient data lies in the field of gas-lift practice: they provide a means of determining the optimum gas-injection rate, optimum injection pressure and optimum injection depth. Much work has already been done in the study of the pressure gradient of vertical gas-liquid flow. Poettmann and Carpenter2 presented a pressure-gradient correlation based on measurements in wells. This correlation has been found to provide accurate predictions in high-pressure wells and in high-production wells for flow through both tubing and annuli.2-5 However, when their method is checked on low-pressure, low-production wells or on wells with viscous crudes, serious discrepancies are found. As we shall see in the next section, this is due to the fact that their correlation factor, representing all irreversible energy losses, is given as a function of only one correlation group. Some important variables, such as gas-liquid ratio and liquid viscosity, are not incorporated in this group, so that their specific effects are not accounted for. To study the mechanism of vertical gas-liquid flow outside the ranges covered by the Poettmann-Carpenter publication and extensions, a laboratory investigation has been carried out. This study is founded on a pressure-gradient equation that is based on a pressure balance.
To reduce the number of test runs required, a dimensional analysis has been carried out, followed by a selection of relevant dimensionless groups. These groups guided a subsequent experimental study, and with their aid the experimental program could be minimized while still covering the majority of the situations encountered in oilfield practice. In this paper the choice of a formula for the pressure gradient is discussed first. This is followed by a brief description of the experimental setup. Subsequently, the dimensional analysis is discussed and the relevant dimensionless groups are selected, resulting in the experimental program required. The general relationships of pressure gradient and liquid hold-up are then described; various flow patterns and a certain flow instability (so-called "heading") are discussed and a set of correlations is presented which shows a good agreement with the measurements and a few available field
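A minimal sketch of the dimensional-analysis step is given below. The four groups shown are the liquid velocity, gas velocity, pipe diameter and liquid viscosity numbers in their usual petroleum-engineering form; this chunk of the paper does not list its groups explicitly, so these particular definitions are an assumption.

```python
import math

# Sketch: four dimensionless groups commonly used to characterize vertical
# gas-liquid flow, of the kind selected by the dimensional analysis above.
# The definitions follow conventional petroleum-engineering practice and
# are assumed, not quoted from the paper.
def flow_groups(v_sl, v_sg, d, rho_l, mu_l, sigma, g=9.81):
    """Return (N_lv, N_gv, N_d, N_L) for SI inputs:
    superficial liquid/gas velocities [m/s], pipe diameter [m],
    liquid density [kg/m3], viscosity [Pa.s], surface tension [N/m]."""
    k = (rho_l / (g * sigma)) ** 0.25
    N_lv = v_sl * k                                 # liquid velocity number
    N_gv = v_sg * k                                 # gas velocity number
    N_d = d * math.sqrt(rho_l * g / sigma)          # pipe diameter number
    N_L = mu_l * (g / (rho_l * sigma**3)) ** 0.25   # liquid viscosity number
    return N_lv, N_gv, N_d, N_L

# Illustrative tubing conditions: 2.5-in. pipe, light crude.
print(flow_groups(0.5, 2.0, 0.062, 850.0, 2e-3, 0.025))
```

Because the groups are dimensionless, an experimental program that spans their ranges covers all combinations of the underlying dimensional variables, which is what allowed the laboratory program to be kept small.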
-
Logging and Log Interpretation - Prediction of the Efficiency of a Perforator Down-Hole Based on Acoustic Logging Information
By A. A. Venghiattis
A rational approach to the selection of the appropriate perforator to use in each specific zone of an oil well is presented. The criteria presently in use for this choice bear little resemblance to actual down-hole conditions. These environmental conditions affect the elastic properties of rocks. One of these elastic properties, acoustic velocity, is suggested as the leading parameter to adopt for the choice of a perforator because, being measured in the natural location of the formation, it takes into account all of the effects of compaction, saturation, temperature, etc., which are overlooked in the laboratory. Equations and curves related to this suggestion are given to allow the prediction of the depth of perforation of bullets and shaped charges when an acoustic log has been run in the zone to be perforated.

INTRODUCTION

When an oil company has to decide on the perforator to choose for a completion job, I wonder if it is really understood that, to date, there is no rational way of selecting the right perforator on the basis of what it will do down-hole. This situation stems from the fact that the many varieties of existing perforators, bullets or shaped charges, are promoted on the basis of their performance in the laboratory, but very little is said on how this performance will be affected by subsurface conditions such as the combination of high overburden pressure and high temperature, for example. The purpose of this paper is to show the limitations of the existing ways of evaluating the performance of perforators, to show that performances obtained in laboratories cannot be extended to down-hole conditions because the elastic properties of rocks are affected by these conditions and, finally, to suggest and justify the use of the acoustic velocity of rocks as the parameter to utilize for the anticipation of the performance of a perforator in the true down-hole environment.
EVALUATING THE PERFORMANCE OF A PERFORATOR

It is natural, of course, to judge the performance of a perforator from the size of the hole it makes in a predetermined target. Considering that the ultimate target for an oilwell perforator is the oil-bearing formation, preceded in most cases by a layer of cement and by the wall of a steel casing, the difficulties begin with the choice of an adequate experimental target material. For obvious reasons of convenience, the first choice that came to the mind of perforator designers was mild steel. This is a reasonable choice for the comparison of two perforators in first approximation. Mild steel is commercially available in a rather consistent state and quality, and is comparatively inexpensive. The trouble with mild steel is that it represents a very much contracted yardstick; minute variations in depth of penetration or hole diameter and shape may be significant though difficult to measure. The penetration of projectiles in steel being a function of the Brinell hardness of the steel (Gabeaud, O'Neill, Grunwood, Poboril, et al), it is often difficult to decide whether to attribute a small difference in penetration to a variation in the target hardness or to an actual variation in the efficiency of the projectile. Another target material which has been widely used for testing the efficiency of bullets or shaped charges in an effort to represent a formation, a mineral target as opposed to an all-steel target, is cement cast in steel containers. This type of target, although offering a larger scale for measuring penetrations, proved so unreliable because of its poor repeatability that it had to be abandoned by most designers.
The drawbacks of these target materials, and particularly their complete lack of similarity with an oil-bearing formation, became so evident that a more realistic target arrangement was sought until a tacit agreement was reached between customers and designers of oilwell perforators on a testing target of the type shown on Fig. 1. This became almost a necessity about seven years ago because of the introduction of a new parameter in the evaluation of the efficiency of a perforator, the well flow index (WFI). The WFI is the ratio (under predetermined and constant conditions of ambience, pressure and temperature) of the permeability to a certain grade of kerosene of the target core (usually Berea sandstone) after perforation to its permeability before perforation. The value of this index for the present state of the perforation technique varies from 0 to 2.5, the good perforators presently available rating somewhere around 2.0 and the poor ones around 0.8. There is no doubt that, to date, the WFI type of test is by far the most significant one for comparing perforators. It is obvious that a demonstration of a perforator
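The WFI defined above is a straightforward ratio of core permeabilities; a minimal sketch, with the function name chosen here for illustration:

```python
# Sketch: the well flow index (WFI) described above -- the ratio of the
# target core's kerosene permeability after perforation to its
# permeability before perforation, measured under fixed conditions.
def well_flow_index(k_after, k_before):
    """WFI = permeability after perforation / permeability before."""
    if k_before <= 0:
        raise ValueError("pre-perforation permeability must be positive")
    return k_after / k_before

# Per the text, good perforators rate around 2.0 and poor ones around 0.8:
print(well_flow_index(200.0, 100.0))  # 2.0
print(well_flow_index(80.0, 100.0))   # 0.8
```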
-
Natural Gas Technology - A Method of Predicting the Availability of Natural Gas Based on Average Reservoir Performance
By Lee Hillard Meltzer, Ralph E. Davis
INTRODUCTION

During the past few years emphasis has been placed upon methods of estimating the future expectancy of gas production from natural gas fields. Before technical methods were applied, the production expectancy over future years was based upon the knowledge of gas well behavior, learned through long experience and embedded in the "know-how" of men long in the gas producing business. It is doubtful that a technical study of future expectancy of a gas field or a group of fields was ever prepared for the preliminary planning of a natural gas pipe line system built prior to about five years ago. The decline in well production capacity was naturally recognized by all familiar with the business since its earliest beginnings more than 75 years ago. In 1935, the Bureau of Mines published Monograph Number 7, "Back-Pressure Data on Natural Gas Wells and Their Application to Production Practices," which gave to the industry the first technical analysis of the decline in production of individual gas wells. This method affords a means of estimating the future production in relation to decline in reservoir pressure. The demand for technical determination of the expectancy of future gas productivity from fields or groups of fields led technical men to apply the knowledge of well behavior to the problem. The decline in a well's ability to produce as pressures declined could be estimated by the use of the curve known as the "back-pressure potential curve" as developed by the Bureau of Mines. A field containing few, or even numerous, wells could be analyzed on the basis of the sum of the potentials of all wells. In most studies of this nature, the problem is to estimate the rate of production that can be expected not only from present wells but also from wells that will in the future have to be drilled into the reservoir being studied. The "back-pressure potential" method requires that the following data be known or estimated: (1) Proved gas reserves.
(2) Current shut-in pressures and the rate at which shut-in pressures change with production. (3) Back-pressure potential data on wells in the source of supply. (4) Ultimate number of wells which will supply gas, and their potential. (5) Limitations on productivity such as line pressures against which the wells will produce, friction drop in the producing string, and so forth. It is evident that the resulting estimate of gas available in each year for a future of, say, 20 years contains many uncertainties. While the method may have considerable merit for a field that is fully developed, it cannot be completely dependable in fields that are only partially developed. In such cases, some of the data upon which it is based can only be estimated or assumed. In the study of this problem during the past few years, a method has been developed which we believe has great merit, especially when applied to fields subject to substantial future drilling, and when applied to the study of fields which, on the average, appear to have characteristics similar, in general, to the average of the fields used in the development of the "yardstick" outlined herein. From an analysis of the production history of 49 reservoirs which are depleted, or nearly depleted, a curve has been constructed which shows the average performance of the reservoirs during the declining stages of production. When properly applied, this "average performance curve" can be used to determine the stage of depletion at which a reservoir or group of reservoirs will no longer be able to yield a given percentage of the original reserves.

"AVAILABILITY" AND "AVAILABILITY STUDIES"

The rate at which a reservoir will yield its gas depends basically upon physical factors, such as the thickness and permeability of the sand, the effect of water drive, if any, and other conditions, and upon economic factors, such as the number of wells drilled.
Within the ranges set by the physical conditions, a rate of delivery tends finally to become established. The rate (or range of rates) represents a balance between the interests of the operator, who desires the maximum return from his property and of the pipe line owner, who desires to maintain a firm supply for his market. This balance, which is influenced by the terms of the contract, determines the capacity which will be developed by the operator, and the time and rate at which the decline in production is permitted to occur. Thus the "availability" of gas
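The back-pressure potential curve discussed earlier in this entry is conventionally written as q = C (p_r^2 - p_wf^2)^n. The sketch below uses illustrative values of C and n; in practice both are fitted to each well's back-pressure test and are not given in this text.

```python
# Sketch: the Bureau of Mines back-pressure (deliverability) relation
# underlying the "back-pressure potential curve" above, in its usual
# empirical form q = C * (p_r**2 - p_wf**2)**n.
# C and n below are illustrative fitted constants, not values from the paper.
def deliverability(p_r, p_wf, C=0.02, n=0.85):
    """Gas rate for reservoir pressure p_r and flowing pressure p_wf (psia)."""
    if p_wf > p_r:
        raise ValueError("flowing pressure cannot exceed reservoir pressure")
    return C * (p_r**2 - p_wf**2) ** n

# As reservoir pressure declines with depletion, the open-flow potential
# (p_wf = 0) falls, which is the decline in well capacity the text describes:
for p_r in (2000.0, 1500.0, 1000.0):
    print(p_r, round(deliverability(p_r, 0.0), 1))
```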
Jan 1, 1953
-
Producing-Equipment, Methods and Materials - Emulsion Control Using Electrical Stability Potential
By J. U. Messenger
A technique is described whereby the resistance of an emulsion to breaking can be quantitatively determined. Produced oilfield emulsions are usually the water-in-oil type and, accordingly, do not conduct an electrical current. However, there is a threshold of A-C voltage above which an emulsion will break and current will flow. The more stable an emulsion, the higher the required voltage. A Fann Emulsion Tester, modified so that low voltages (0 to 10 v) can be accurately measured, is suitable. This technique has application in evaluating the effect of a demulsifier on the stability of an emulsion. Emulsions can, in essence, be titrated with demulsifiers by adding a quantity of demulsifier, stirring, and measuring the voltage required to cause current to flow. Any synergistic effect of two or more materials added simultaneously can be followed accurately. A demulsifier that significantly lowers the threshold voltage (from 100 to 400 v down to 0 to 10 v for the emulsions in this study) is effective and can cause the emulsion to break. A demulsifier that will bring about this drop in the threshold voltage at low concentration is very desirable. The technique is also well adapted for rapidly screening demulsifiers.

INTRODUCTION

Stable emulsions in produced reservoir fluids resulting from certain well stimulation and completion procedures are common problems. The use of suitable demulsifiers can often mitigate these difficulties. At the present time, a rapid and efficient method for selecting satisfactory demulsifiers is not available. It is badly needed. Reliance is now placed primarily on trial-and-error procedures. A new test method has been developed which permits a more rapid and precise selection of demulsifiers. It involves measuring the electrical stability potential of an emulsion before and after a demulsifier has been added. This paper describes this method and shows where it should have application in field emulsion problems.
NATURE OF OILFIELD EMULSIONS

Two immiscible components must be present for an emulsion to form; we are concerned here with crude oil and water. An emulsifier must be present for an emulsion to be stable. Emulsifiers can be substances which are soluble in oil and/or water and which lower interfacial tension. They can be colloidal solids such as bentonite, carbon, graphite, or asphalt which collect at the interface and are preferentially wet by one of these phases. Unrefined crude oils can contain both types of emulsifiers. A popular theory is that, of the two phases in an emulsion, the dispersed phase will be the one contributing most to the interfacial tension.1 Usually this phase contains the least amount of emulsifier. The stability of a water-in-oil emulsion is affected by the following: (1) viscosity; (2) particle or droplet size; (3) interfacial tension between the phases; (4) phase-volume ratios; and (5) the difference in density between the phases. A stable emulsion is usually characterized by high viscosity, small droplets, low interfacial tensions, small differences in density between its phases, and slow separation of the phases. It also has low conductivity (high electrical stability potential). Water-in-oil and oil-in-water emulsions are both common; however, oilfield emulsions are predominantly water-in-oil emulsions. The emulsions which commonly occur during completion and stimulation operations contain a combination of several of the following: acids, fracturing fluids (oil, water, acid), and formation water and oil. Produced emulsions usually contain formation water and oil. Emulsions form in oil wells because oil and water are mixed together at a high rate of shear in the presence of a naturally occurring or unavoidably produced emulsifier. During the completion and stimulation of productive zones, and while formation fluids are being produced, oil and water are very often commingled.
These mixtures are formed into emulsions by agitation which occurs when the fluids are pumped from the surface into the matrix of the formation or produced through the formation to the surface. Restrictions to flow (such as perforations, pumps, and chokes) increase the level of agitation; tight emulsions are more likely to form under these conditions. Often an emulsified droplet is an emulsion itself. Therefore, emulsion-breaking problems can be quite complex. The complexity can be even greater if a third phase (gas) is included. Demulsifiers operate by tending to reverse the form of the emulsion. During this process, droplets of water become bigger, viscosity is lowered, color becomes darker, separation of the phases becomes faster, and the electrical stability potential approaches zero. Any of these effects could be followed as a means of determining emulsion stability. However, electrical stability potential is the most reproducible and most easily measured parameter for following the stability of a water-in-oil emulsion.
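The titration procedure described above reduces to a simple bookkeeping exercise: record the threshold voltage after each demulsifier addition and report the lowest dose at which the voltage falls into the 0-to-10-v band the authors associate with a broken emulsion. A minimal sketch in Python; all doses and voltages below are hypothetical, not measurements from the paper:

```python
def effective_dose(titration, cutoff_v=10.0):
    """Return the lowest demulsifier dose at which the emulsion's
    threshold voltage falls to or below `cutoff_v`.

    `titration` is a list of (dose, threshold_voltage) pairs recorded
    after each addition, in order of increasing dose. Returns None if
    the emulsion never destabilizes over the doses tried.
    """
    for dose, volts in titration:
        if volts <= cutoff_v:
            return dose
    return None

# Hypothetical titration of a water-in-oil emulsion (dose in ppm):
run = [(0, 320.0), (50, 280.0), (100, 90.0), (150, 8.0), (200, 2.0)]
print(effective_dose(run))  # -> 150, the first dose giving <= 10 v
```

The same loop screens candidate demulsifiers rapidly: the candidate reaching the cutoff at the lowest dose is the most attractive.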
Jan 1, 1966
-
Institute of Metals Division - The Immiscibility Limits of Uranium with the Rare-Earth MetalsBy A. H. Daane, J. F. Haefling
The limits of miscibility in some of the uranium rare-earth alloy systems have been determined in the temperature range 1000° to 1250°C. The solubilities of lanthanum and cerium in uranium are greater than those of the remaining rare earths by a factor of more than two. The solubility of uranium is greater in cerium, praseodymium, and neodymium than in the other rare-earth metals studied. The values found in this study are in qualitative agreement with those which might be expected if the solubility rules of Hildebrand and Scott are applicable. As interest in nuclear reactors intensifies, many new types of fuels are being suggested in attempts to improve the economics of some of the proposed reactor schemes. To remove some of the difficulties inherent in the use of solid-fuel elements and their reprocessing, many types of liquid-metal reactors have been suggested. One of the more attractive features of several of these reactor concepts is that they include a continuous or semicontinuous process for the extraction of fission products and "bred" fissionable materials from the fuel, utilizing immiscible metal extractants. This would enable a much higher burn-up of fissionable material to be achieved and would present a very attractive economic picture. Several studies have been reported on equilibrium systems in which there exists a high degree of immiscibility between uranium and another metal that might be used as an extractant in such a processing scheme.1 Two of these systems in which a high degree of immiscibility exists are those of uranium with the two rare-earth metals lanthanum and cerium. Since the rare earths constitute a significant fraction of the fission products, their removal is of prime importance. It is reasonable to believe that this might be accomplished by equilibrating a rare-earth phase with the contaminated uranium fuel in the liquid state. 
In order to make a more complete study of those systems which would be of interest either as extractants in a liquid-liquid extraction process, or as fission products formed in the fuel, the alloy systems of uranium with lanthanum, cerium, praseodymium, neodymium, and samarium were studied in some detail in the temperature range 1000° to 1250°C; less detailed studies were made with the other rare earths. In addition to being of value to the reactor program, the data obtained in this study should be of help in making a study of the role played by the electronic structures of metals in determining the nature of metallic solutions. The unique electronic structures of the rare-earth elements make them particularly interesting in this respect. EXPERIMENTAL The usual procedure for a solubility determination was to seal equal volumes of uranium and the particular rare earth in a tantalum crucible under an atmosphere of helium; this crucible was then sealed in a stainless steel jacket in an atmosphere of helium. These samples were equilibrated by repeated inverting of the crucibles in a furnace for 15 min at the desired temperature, left in an upright position for 15 min to permit separation of the two phases, and then quenched under a stream of water. In some runs the temperature of the furnace was held 50° to 100°C above the desired quenching temperature while inverting in order to insure good mixing. However, it was found that above 1200°C the crucibles were subject to failure, and for these runs the furnace temperature was not raised above the desired quenching temperature. A small amount of tantalum was dissolved in the uranium and the rare earths in these runs, a maximum of 3 wt pct in the uranium phase at 1250°C and up to 1 wt pct in the rare-earth phase at this temperature. On cooling, the major portion of this tantalum precipitated as primary tantalum crystals. 
Any residual tantalum would probably have a negligible effect on the mutual solubility of uranium and the rare earths in each other. Samples for analysis were cut from each phase with an abrasive cutting wheel; the region near the interface between the two metals was carefully avoided. In the case of the rare earths with melting points above 1250°C no solubility data were taken on the rare-earth phase since this phase could not have achieved equilibrium in a reasonable length of time. (For the same reason no data were taken on the uranium phase below its melting point of 1132°C.) Equilibrium appeared to have been reached in the uranium phase in these cases although the rare-earth phase had not melted. To verify this, samples were melted together in an arc furnace similar to that described by Kroll.2 These samples were sub-
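The qualitative agreement with the Hildebrand and Scott rules mentioned above can be illustrated with regular-solution theory, in which the heat of mixing grows with the square of the difference in solubility parameters; a larger mismatch implies a more positive heat of mixing and hence lower mutual solubility. A sketch under that theory; the numerical inputs below are illustrative placeholders, not measured parameters for uranium or the rare earths:

```python
def regular_solution_hmix(delta1, delta2, v_cm3, phi1=0.5, phi2=0.5):
    """Heat of mixing (cal) per Hildebrand-Scott regular-solution theory:
    dH_mix = V * phi1 * phi2 * (delta1 - delta2)**2,
    where delta1, delta2 are solubility parameters in (cal/cm^3)**0.5,
    V is the mixture volume in cm^3, and phi1, phi2 are volume fractions.
    """
    return v_cm3 * phi1 * phi2 * (delta1 - delta2) ** 2

# Illustrative: the larger the parameter mismatch, the larger dH_mix,
# and the stronger the tendency toward immiscibility.
print(regular_solution_hmix(100.0, 80.0, 10.0))   # large mismatch
print(regular_solution_hmix(100.0, 95.0, 10.0))   # small mismatch
```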
Jan 1, 1960
-
Institute of Metals Division - Surface Diffusion of Gold and Copper on CopperBy Jei Y. Choi, P. G. Shewmon
The surface-diffusion coefficients (Ds) for Au198 on (100) and (111) surfaces of copper have been determined between 1050° and 780°C using a new analysis and experimental procedure. The results are: Ds has also been determined for Cu64 at 870°C, and the values found are 4.5 times larger than those measured by the grain-boundary grooving technique for the same surface orientations. This difference is felt to result from the approximate nature of the mathematical solution used in the present work. Attempts to measure Ds for silver on copper and silver surfaces indicated that a means of matter transport different from surface diffusion was dominant in moving tracer from the source out over the surface. Calculations and experiment both indicate that this is the flow of silver through the vapor phase, which completely masks the much smaller flow due to surface diffusion. The previous self-diffusion studies of Ds for silver and copper are discussed in terms of our own analysis and found to yield values of Ds factors of 10^5 or more greater than those found by the grain-boundary grooving technique. Until about 5 years ago it was widely believed that the activation energy for surface diffusion, dHs, was less than that for grain-boundary diffusion, dHb, which in turn was less than that for diffusion through the lattice, dHl.1 This was concluded from various evidence that Ds > Db > Dl, and one tracer study of Ds for silver on silver from which dHs was inferred.2 In 1959 Mullins and Shewmon demonstrated that Ds could be determined from the kinetics of the growth of grain-boundary grooves.3 Using this procedure, Gjostein measured Ds on copper between 800° and 1050°C and found that the activation energy was roughly equal to dHl.4 Subsequent work on copper,5,6 silver,7,8 and gold9 between the melting temperature Tm and 0.87 Tm confirmed that dHs as determined using the grain-boundary grooving or scratch-relaxation technique was equal to or greater than dHl. 
During the same period, Drew and Pye again determined dHs for silver on silver using a tracer technique10 and a mathematical solution similar to that of Nickerson and Parker.2 Though the values of Ds Drew and Pye measured at any given temperature were about 200 times smaller than those reported by Nickerson and Parker, they again found a low activation energy of about 10 kcal, or about one fifth that found at the higher temperatures with the mass-transport technique. A distinguishing characteristic of these two previous tracer studies is that they worked at low temperatures (~1/2 Tm), where they felt volume diffusion was negligible, and then analyzed their data as if all tracer atoms leaving the source flowed out into and remained in a homogeneous high-diffusivity surface layer of undefined thickness. This is totally different from the model used in the mass-transport studies or the studies of grain-boundary diffusion, which assume the high-diffusivity surface layer to be only a few angstroms thick. If this latter model is applied to the earlier tracer studies, it is shown that the tracer has really penetrated into the lattice a mean distance of 1000 A. Thus the tracer distribution observed after an anneal is thought to be due to the combined effects of surface and volume diffusion. Independent of the relative validity of the two models, it seems evident to us that any comparison of the values of Ds as determined in these two ways is meaningless and misleading, since the values of Ds and dHs obtained in these two ways would be totally different for the same physical distributions of tracer. Once the fundamental difference in the approaches of the two techniques is established, we are faced with the question of which model better approximates physical reality. Here all the evidence seems to be on the side of the "thin surface layer" analysis. In fact, the authors of Refs. 
2 and 9 do not argue for the "thick-layer model" we have described; they simply invoke it through the equation they use to calculate Ds. The primary evidence for the thin-film approach is: (a) grain-boundary grooves and scratches widen in proportion to t^(1/4), and Mullins' rigorous analysis shows that this is only valid for a surface layer which is quite thin relative to the width of the groove;11 (b) all accepted or seriously discussed models of solid-vapor interfaces and high-angle grain boundaries assume that the disturbed region of the interface is at most a few a0 thick. With the above in mind, it was desirable to determine Ds using a radioactive tracer and a "thin-
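The 1000 A mean penetration argued above follows from the usual diffusion-length estimate x ~ sqrt(D*t). A sketch of that order-of-magnitude check; the diffusivity and anneal time used below are illustrative values chosen to land near 1000 A, not the paper's data:

```python
import math

def mean_penetration_angstrom(d_cm2_s, t_s):
    """Characteristic volume-diffusion penetration depth, x ~ sqrt(D*t),
    with D in cm^2/s and t in seconds, converted to angstroms
    (1 cm = 1e8 A)."""
    return math.sqrt(d_cm2_s * t_s) * 1e8

# Illustrative: a lattice diffusivity of 1e-13 cm^2/s over a 1000-s
# anneal gives a penetration depth on the order of 1000 A, which is
# why the "thick-layer" tracer analyses mix surface and volume flow.
print(mean_penetration_angstrom(1e-13, 1000.0))
```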
Jan 1, 1964
-
Part VI – June 1969 - Papers - The Effects of Solute Additions on the Stacking Fault Energy of a Nickel-Base SuperalloyBy P. S. Kotval, O. H. Nestor
Stacking fault energy measurements of nickel-base alloys have been mainly confined to binary and ternary systems. In this paper, the stacking fault energy has been measured by the rolling texture method in a series of ten alloys which comprise successive additions of Cr, Mo, Fe, and C to pure nickel, eventually resulting in an alloy of the composition of Hastelloy alloy X. The alloys studied here are single-phase solid solutions with the exception of two alloys in which some undissolved particles of "primary" carbide have been retained. It is found that successive additions of chromium, molybdenum, and iron all lower the stacking fault energy, with iron having only a minor effect. The stacking fault energy is found to increase when carbon is added in solid solution. The results from the rolling texture measurements are correlated with thin foil observations of dislocation substructures in these alloys. In a recent paper1 it was pointed out that the dislocation substructure of various superalloy matrices could be classified into three broad categories based on 'high', 'medium', and 'low' stacking fault energy. It has also been demonstrated2 that the dislocation substructure in each of these categories has a well defined role in the nucleation of strengthening precipitates which is different from the role played by the dislocation substructure in other categories. Thus, it becomes desirable to understand the influence of various solute elements on the stacking fault energy, and hence on the dislocation substructure of the matrix, before any further development of superalloys by microstructural predesign can be undertaken. Recently, Beeston and France have studied the influence of increasing solute additions on the stacking fault energy of a series of binary nickel-base alloys relevant to the Nimonic series using the rolling texture method, and have then estimated the effect of a given alloy addition in five commercial Nimonic alloys. 
However, comparison with stacking fault energy data from other investigations5 suggests that the influence of a given solute element in a nickel-base binary system is not necessarily the same in a ternary or more complex superalloy system. Accordingly, the present work was undertaken to study the effect of successive addition of solute elements to pure nickel, the final composition being the nominal composition of Hastelloy X. The rolling texture method of stacking fault energy measurement was used since it can be applied over the whole range of stacking fault energy values and does not have the disadvantage of, say, the Node method, which is only applicable to low values of stacking fault energy. In addition, the rolling texture method provides a means of determining the stacking fault energy which is statistically more significant than that provided by other methods. EXPERIMENTAL TECHNIQUES Button heats of alloys of the compositions shown in Table I were prepared. Each button was remelted not less than four times. After a slight deformation (approximately 5 pct) all alloys were homogenized at 2200°F except alloys H, I, and J. Alloys H and I were solution heat treated at 2150°F and alloy J at 2282°F. The buttons were cold worked by rolling, using "end-to-end" passes and intermediate anneals at the homogenization temperatures mentioned above. After each annealing treatment the samples were rapidly water quenched to avoid any precipitation. In alloys F and I, however, a few particles of "primary" carbides were retained even after the homogenization treatments at the temperatures mentioned above. Part of the solution heat treated material was cold worked to 0.04-in.-thick sheet and the penultimate reduction was ~50 pct of deformation as recommended by Dillamore et al. All annealing was carried out in vacuo within sealed quartz capsules. Some of the material from each alloy was rolled down further to 0.004 in. 
strip for thin foil transmission electron microscopy specimens. Specimens of this strip were annealed at the homogenization temperature for 1 hr and then strained 7 pct by rolling at room temperature. Thin foils were prepared from the strip specimens by the "window" technique using an ethanol-perchloric acid electrolyte at 32°F and a voltage of 22 v. Stainless steel cathodes were employed. All transmission electron microscopy was performed in a JEM-7 electron microscope using an accelerating voltage of 100 kv. Specimens from the 0.04 in. sheet which had been rolled ~60 pct in the final pass were electropolished to remove the surface layers to a depth of approximately 0.002 in. Rolling texture pole figures for all the alloys were determined using a Schulz ring and nickel-filtered CuKa radiation at 50 kv and 20 ma. The texture parameter Io/(Io + I,,) (where Io is the
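The texture parameter quoted above is a normalized intensity ratio of the form Ia/(Ia + Ib); the exact pole-figure reflections entering the ratio are cut off in this excerpt, so the sketch below keeps both intensities as generic placeholders rather than naming specific reflections:

```python
def texture_parameter(i_a, i_b):
    """Normalized pole-figure intensity ratio, I_a / (I_a + I_b),
    used to index the rolling texture. Which measured intensities
    play the roles of I_a and I_b is defined by the method; the
    arguments here are placeholders."""
    return i_a / (i_a + i_b)

# Illustrative intensities (arbitrary counts): the parameter runs
# from 0 (all intensity in I_b) to 1 (all intensity in I_a), so it
# gives a single number per alloy that can be correlated with
# stacking fault energy.
print(texture_parameter(3.0, 1.0))  # -> 0.75
```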
Jan 1, 1970
-
Reservoir Engineering-General - A Study of Forward Combustion in a Radial System Bounded by Permeable MediaBy G. W. Thomas
A mathematical model of forward combustion in an oil reservoir is treated in this paper. The model describes a radial system having a vertical section of essentially infinite thickness, all of which is permeable to gas flow. Combustion, however, is presumed initiated over a limited thickness of the total vertical section. In the interval supporting combustion, the mechanisms of radial conduction, convection and heat generation are taken into account. Above and below the burning interval, heat transport in the radial direction is by conduction and convection. Vertical heat losses from the ignited interval are accounted for by conduction alone. A general solution is presented for the temperature distribution caused by radial movement of the combustion front. The results show that no feedback of heat occurs into the ignited interval when convection and conduction are acting in the bounding media. Peak temperatures are also 5 to 10 per cent higher than in the case where heat transport in the bounding media is by conduction alone. We arbitrarily define vertical coverage to be that fraction of the total ignited interval which is at 600°F above ambient, or greater, at any given time. The radial distance at which the vertical coverage becomes zero is the propagation range of the combustion front. It was found that an increase in vertical coverage results when the oxygen concentration, fuel concentration or gas-injection rate is increased. Moreover, the combustion front can be propagated 10 to 15 per cent further than in the case where only conduction is acting above and below the ignited interval. INTRODUCTION In the theoretical treatment of forward combustion in a radial system, one of the problems encountered is the determination of the transient temperature distributions caused by an expanding cylindrical heat source. Bailey and Larkin1 and Ramey2 simultaneously presented analytical solutions to the problem assuming heat transport by conduction alone. 
In a subsequent publication, Bailey and Larkin3 included the effects of both conduction and convection while treating linear and radial models. In this latter work, however, vertical heat losses were largely neglected. Selig and Couch4 dealt with a radial model in which both conduction and convection were acting. Only a limiting case involving vertical heat losses was considered, however; namely, temperatures on the boundary of the bed of interest were set equal to zero. Solutions thus obtained were representative of a system having a maximum vertical heat flux. Chu5 recently treated a more general case in which a permeable bed was considered bounded by impermeable media. Conduction and convection took place within the bed, and only conduction outside of the bed. The effects of vertical heat losses were included in his study. Solutions were obtained by numerical techniques. This paper is an extension of the theoretical work of other authors pertaining to forward combustion in a radial system. In particular, a mathematical model of the process is treated in which heat generation occurs over a small vertical interval of a larger permeable section. In the interval supporting heat generation, and above and below this interval, the mechanisms of radial conduction and convection are also presumed acting. Heat losses from the ignited interval are accounted for by vertical conduction. An analytical solution for the temperature distribution caused by radial movement of the burning front is presented. The effects of certain process variables are indicated and comparisons with Chu's results are made. THEORY To render the mechanism of forward combustion tractable to mathematical treatment, we idealize the problem to the extent of assuming continuous reservoir media possessing homogeneous and isotropic properties. The following additional assumptions are implicit in this analysis. 1. 
The thermal parameters, i.e., heat capacities, thermal conductivities and thermal diffusivities, are invariant with temperature and pressure. Moreover, the bounding media possess the same thermal properties as the bed of interest. 2. The temperatures of the porous medium and its contained fluids at any point and at any time are equal. 3. The reaction rate between the oxidant gas and the fuel is infinite. This assumption implies that the incoming oxygen concentration instantaneously goes to zero within an infinitesimal distance, i.e., the width of the combustion zone is negligible. 4. The rate of gas injection is constant and corresponds to the average rate throughout the lifetime of the project. 5. The fuel concentration is constant throughout the volume of rock swept out by the burning zone. 6. There is complete burnoff of fuel. This assumption demands that the rate of propagation of the burning front equals the rate of fuel burnoff. In a radial system, with a
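The vertical-coverage definition given in the abstract, the fraction of the total ignited interval that is 600°F or more above ambient, is straightforward to evaluate on a computed temperature profile. A sketch in Python; the profile values below are hypothetical, not solutions of the paper's model:

```python
def vertical_coverage(temps_f, ambient_f, cutoff_rise_f=600.0):
    """Fraction of equally spaced points across the ignited interval
    whose temperature rise above ambient is at least `cutoff_rise_f`
    (600°F in the paper's arbitrary definition)."""
    hot = sum(1 for t in temps_f if t - ambient_f >= cutoff_rise_f)
    return hot / len(temps_f)

# Hypothetical vertical temperature profile (°F) at one radial distance;
# coverage shrinks with radius, and the radius where it reaches zero is
# the propagation range of the front.
profile = [1400.0, 1250.0, 900.0, 700.0, 650.0, 620.0]
print(vertical_coverage(profile, ambient_f=100.0))  # 4 of 6 points qualify
```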
-
Reservoir Performance - Field Studies - Reservoir Performance of a High Relief PoolBy E. P. Burtchaell
A method is presented for evaluating the effect of gravity drive upon the reservoir performance of a high relief pool. Conventional forms of reservoir analysis do not consider the alterations in the basic material balance data caused by gravity segregation of reservoir fluids. A procedure is outlined for structurally weighting physical and chemical data for use in the material balance equation. It is demonstrated how actual pool performance data can be utilized to evaluate the future reservoir performance of a gravity drive pool. INTRODUCTION Conventional reservoir engineering procedure is inadequate for the analysis of an oil pool which has considerable structural relief, steep dips, and good permeability development. In pools of this type, gravity drainage has an important part in the movement of oil to the wells, and the effects of gravity on the overall pool performance should be included in any analysis of reservoir behavior. Many engineers have the opinion that the force of gravity in the movement of oil is not important until the later life of a pool.1 Probably the basis for this belief is that gravitational effects may not be readily discernible until a pool is nearing depletion. This would be especially true for pools not having a high degree of structural relief and permeability development. Actually the effects of gravitational forces are at a maximum when the pool pressure is high, for during this period the hydrostatic head of the oil column is at a maximum and the viscosity of the oil is at a minimum. Oil recoveries from pools having favorable gravity drive characteristics may equal or even exceed recoveries which might be expected from water displacement. 
Field evidence indicates that in some reservoirs gravity drive has resulted in recoveries greater than that which could have been expected from gas expansion or water drive.2,3 Unfortunately, the possible effects of gravity drive on pool performance have been underestimated and other reasons have been sought to explain the high recoveries obtained. There are unquestionably many reservoirs to which the principles of gravity drainage can be effectively applied. It is the purpose of this paper to illustrate one method whereby gravity drive is included in the reservoir analysis of an oil pool. A hypothetical pool, typical of many California reservoirs, is used as an example. As used in this paper, "gravity drive" is defined as the overall effect of gravitational influences on the recovery of petroleum from the reservoir; "gravitational segregation" as the gravity separation of oil and gas within the reservoir; and "gravity drainage" as the downward movement of oil as caused by the force of gravity. SAND VOLUME DATA Fig. 1 presents a structural contour map of the pool under study. Maximum closure is 1950 feet with dips on the south flank approaching 45°. The original gas-oil interface was set at -5200 feet. Average thickness of the producing sand was 200 feet. For use in subsequent calculations in this paper, the pool was subdivided into 100-foot vertical increments and the sand-volume content of each increment was obtained. If the gross sand thickness is small, under 100 feet, the sand-volume content can be obtained by superimposing an isopachous map upon a structural contour map and planimetering the average thickness of each 100-foot increment. For sand thicknesses over 100 feet, one approach would be to construct a sufficient number of cross-sections of the pool from which the weighted sand-volume of each 100-foot increment could be obtained. Variations in the sand body with depth, as determined by core data, can also be included in the above process. 
Table I presents a summary of sand-volume calculations, core data, and the original distribution of reservoir hydrocarbons in the pool. Fig. 2 illustrates the structural distribution of the sand-volume content. A total of 171,398 acre-feet is contained within the productive limits of the pool. Assuming an average porosity of 25% and an interstitial water content of 20%, the original hydrocarbon content was computed to be 227,075,000 barrels. DEPTH-PRESSURE DATA The determination of the initial vertical pressure arrangement in the pool is necessary for PVT and material balance calculations. Whenever sufficient data are available, a plot of pressure versus subsea depth of measurement should be made. From this plot a representative fluid pressure gradient can be established. Lacking sufficient initial pressure data, an initial pressure gradient may be estimated or calculated from avail-
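The pool totals above can be cross-checked with the standard volumetric relation: 7,758 bbl of bulk volume per acre-foot, reduced by porosity and interstitial water. Note that this one-step bulk estimate comes to about 266 million bbl, somewhat above the paper's 227,075,000-bbl figure, which was presumably built up from the per-increment data of Table I (and possibly expressed at different fluid conditions); the sketch below shows only the bulk calculation:

```python
def hydrocarbon_pore_volume_bbl(acre_ft, porosity, sw):
    """Reservoir hydrocarbon pore volume in barrels:
    7,758 bbl per acre-foot of bulk rock, times porosity,
    times (1 - interstitial water saturation)."""
    return 7758.0 * acre_ft * porosity * (1.0 - sw)

# Pool totals from the text: 171,398 acre-ft, 25 pct porosity,
# 20 pct interstitial water.
print(round(hydrocarbon_pore_volume_bbl(171398, 0.25, 0.20)))  # ~266 million bbl
```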
Jan 1, 1949
-
Producing – Equipment, Methods and Materials - Pressure Measurements During Formation Fracturing OperationsBy H. D. Hodges, J. K. Godbey
In order to better understand the fracturing process, bottom-hole pressures were measured during a number of typical fracturing operations. A recently developed system was used that allows simultaneous surface recording of both the bottom-hole and wellhead pressures on the same chart. The results from six fracturing treatments are summarized on the basis of the pressure data obtained. Although no complete analysis is attempted, the value of accurate pressure measurements is emphasized. Important characteristics of the bottom-hole pressure record do not appear at the wellhead because of the damping effect of the fluid-filled column. In four of the six treatments described, the formations apparently fractured during the initial surge of pressure with only crude oil in the well. The properties of the fluids used during the treatments are given and the fluid friction losses are obtained directly from the pressure records. This technique is also shown to be adequate for determining when various fluids, used during the process, enter the formation. INTRODUCTION Hydraulic fracturing for the purpose of increasing well productivity is now accepted in many areas as a regular completion and workover practice. Numerous articles have appeared in the literature discussing the various techniques and theories of hydraulic fracturing.1 In general, three basic types of formation fractures are recognized today. These are the horizontal fracture, the vertical fracture, and fractures along natural planes of weakness in the formation.2 Any one or all three of these fracture types may be present in a fracturing operation. However, with only the wellhead pressure record as a guide, it is difficult at best to determine if the formation actually fractured, and is almost impossible to determine the type of fracture induced. 
These difficulties arise in part because the wellhead pressure record, especially when fracturing through tubing, does not accurately reflect the pressure variations occurring at the formation. Several factors contribute to this effect and preclude the possibility of using the wellhead pressure as a basis for accurately calculating the bottom-hole pressure. These factors are: 1. The compressibilities of the fluids, which damp the pressure variations. 2. The changes in the densities of the fluids or apparent densities of the sand-laden fluids. 3. The flowing friction of the various fluids and mixtures, which is dependent on the flow rates and the condition of the tubing, casing, or wellbore. 4. The non-Newtonian characteristics of a sand-oil mixture and its dependence upon the fluid properties, the concentration of sand, and the mesh size used. 5. The unknown and variable temperatures throughout the fluid column. For these reasons it was determined that, in order to obtain a more accurate knowledge of the nature of fracturing, the bottom-hole pressure must be measured along with the pressure at the surface during a fracturing treatment. Even with accurate pressure data, a reliable estimate of the nature of fracturing is still dependent upon knowledge of the tectonic conditions. However, the hydraulic pressure on the formation is basic to any approach to a complete analysis. In order to accomplish this objective a system was developed to record the wellhead and bottom-hole pressures simultaneously at the surface. By recording both pressures on a dual pen strip-chart recorder, it was possible to greatly expand the time scale so that rapid pressure variations would be faithfully recorded. By such simultaneous recording, time discrepancies inherent in separate records are eliminated, thus overcoming one of the most difficult problems associated with bottom-hole recording systems. 
This paper illustrates the results obtained by using this system during six typical fracturing operations. All of these tests were conducted in wells that were treated through tubing. By a direct comparison of the wellhead and bottom-hole pressures, the importance of obtaining complete pressure information during a fracturing treatment is emphasized. THE INSTRUMENTATION AND PROCEDURES The bottom-hole pressure measuring instrument consisted of a pressure-sensing element, a telemetering section, and a lead-filled weight or sinker bar. The pressure-sensing element used was an isoelastic Amerada pressure-gauge element. By using an isoelastic element, no temperature compensation was necessary in the tests described, since the temperature was believed to be well below the maximum temperature limit of 270°F. The rotary output shaft of this helical Bourdon tube element was coupled to a precision miniature potentiometer. The rotation of the pressure-gauge shaft thus changed the resistance presented by the potentiometer
-
Industrial Minerals - Natural Abrasives in CanadaBy T. H. Janes
NATURAL abrasives of some type are found in all countries of the world. In order of their hardness the principal natural abrasives are diamond, corundum, emery, and garnet, which are termed high grade, and the various forms of silica, including pumice, pumicite, ground feldspar, china clay and, most important, sandstone. The properties qualifying materials for use as abrasives are hardness, toughness, grain shape and size, character of fracture, and purity or uniformity. For manufacture of bonded grain abrasives such as grinding wheels, the stability of the abrasive and its bonding characteristics are also important. No single property is paramount for all uses. Extreme hardness and toughness are needed for some applications, as in diamonds for drill bits, while for other purposes the capacity of the abrasive to break down slowly under use and to develop fresh cutting edges is of greatest importance, as with garnet for sandpaper. In dentifrices, soaps, and metal polishes, of course, hardness and toughness are objectionable. First among the natural abrasives, industrial diamonds are essentially of three types: 1—bort, which includes off-color, flawed, or broken fragments unsuitable for gems; 2—carbonado, or black diamond, a very hard and extremely tough aggregate of very small diamond crystals; and 3—ballas, a very hard, tough globular mass of diamond crystals radiating from a common center. Bort comes from all diamond-producing centers, carbonados only from Brazil, and ballas chiefly from Brazil, although a few of this last group come from South Africa. By far the largest producer of industrial diamonds is the Belgian Congo; the Gold Coast, Angola, the Union of South Africa, and Sierra Leone supply most of the remainder. There is no production in Canada, which imports $6 to $9 million worth of industrial diamonds annually. Industrial diamonds find innumerable uses in modern industry.
They are used for diamond drill bits for the mining industry; in diamond dies for wire drawing; in diamond-tipped tools for truing abrasive wheels and for turning and boring hard rubber, fibers, and plastics; and in diamond-toothed saws for sawing stone, glass, and metals. High-speed tool steels, cemented carbides, and other hard, dense alloys can be cut, sharpened, or shaped efficiently only with diamond-tipped tools and diamond grinding wheels. Second only to the diamond in hardness is corundum, an impure form of the ruby and sapphire gems consisting of alumina and oxygen (Al2O3) with impurities such as silica and ferric oxide. Corundum generally crystallizes from magmas rich in alumina and deficient in silica, as in the nepheline syenites of eastern Ontario. Grain corundum is used in the manufacture of grinding wheels; very coarse grain is used in snagging wheels. Both types of wheels are employed in the metal trades, where the hardness of corundum, coupled with its characteristic fracturing into sharp cutting edges, makes it an ideal cutting tool. The finest corundum (flour grades) is used for fine grinding of glass and high-precision lenses. From 1900 to 1921 Canada was the world's leading producer of corundum. Following this period the deposits located in northern Transvaal of the Union of South Africa supplied more and more of the world's requirements, and since 1940 South Africa has provided almost the entire output, which has ranged between 2500 and 7000 tons a year during the last decade. Minor amounts have also been produced in Mozambique, India, and Nyasaland. Opportunities for Mining Corundum Corundum deposits in southeastern Ontario are of three types, which may be described as follows: 1—Scattered, irregularly-shaped deposits of coarse-grained corundum which could be mined by means of small pits. About 10 groups of such deposits are known.
Although the tonnage of individual deposits of this type is not great, it has been estimated that several years' ore supply is available for a small tonnage operation. Deposits average about 9 pct corundum. 2—Large irregular deposits of coarse-grained corundum which would require mining by adit with possibly a scavenger operation on the remains of former surface deposits. The Craigmont deposit of this type produced about 20,000 tons of corundum concentrate during operations between 1900 and 1913. Most of the readily available surface ore was removed by operators during that time. Reserves of ore above road level have been estimated to average 7 pct corundum, but none of the so-called reserves have been blocked out, or even indicated, by diamond drilling. From 1944 to 1946, 2025 tons of
Jan 1, 1955
-
Producing - Equipment, Methods and Materials - A Theoretical Analysis of Steam StimulationBy J. C. Martin
A theoretical analysis of steam stimulation is presented for single sands. The analysis includes the effect of steam production and most of the effects of heat conduction. The results show the effects of a number of important variables on the performance of an idealized well under steam stimulation. Calculated responses are presented which indicate the effects of steam production, amount of steam injected, water production, formation thickness and formation damage. Results indicate that steam production can cause large reductions in the heat contained in the formation. This effect can be eliminated by drawdown control. Water production reduces the amount of oil produced during stimulation. The optimum amount of steam to inject depends on economic factors as well as the well response. In many cases, the increased temperature resulting from stimulation reduces oil viscosity near the well sufficiently to overcome the effects of formation damage even if the damage is not removed during steam injection. Calculated responses for thin sands are more favorable than anticipated. INTRODUCTION Little has been published on the theory of steam stimulation despite the interest it has created and the wide variation in well responses. The results of the present analysis provide an insight into steam stimulation, and the methods employed provide a foundation for future work. Analyses presented in Refs. 1 and 2 are very limited and apply to gravity drainage conditions. Ref. 3 contains an analysis similar to the one presented here. The idealized models used and the assumptions made in Ref. 3 are different from those used in this paper. The analysis assumes that after steam injection has heated a small portion of the volume within the radius of drainage of a single uniform sand, a shut-in soaking period is allowed before returning the well to production. The effects of gravity, capillarity, transient pressure and water-sensitive sands are neglected.
The injection and soaking times are assumed short compared to the stimulated production time. The initial temperature is assumed uniform; thus, the results apply primarily to first-cycle stimulation. The effects of gas production other than steam are neglected, and the water-oil ratio during production is assumed constant. Steam stimulation involves the simultaneous variation of the temperature, pressure and saturations. General mathematical equations for these variations are complicated and can be very difficult to solve. Simplified equations based on idealized models are used to reduce the mathematics sufficiently to allow approximate solutions to be obtained. DISCUSSION INJECTION An idealized model for which heat conduction is neglected is used to represent the behavior of a well during steam injection. The mathematics for this model is presented in Appendix A, which also contains an approximate solution for the behavior of the no-conduction model. SOAKING During soaking, the well is shut in. Only the temperature, pressure and saturation distributions at the end of soaking are needed in the analysis. During soaking the heat is considered to be conducted in a uniform medium from an initially uniformly-heated circular cylinder confined to the producing interval. At the end of soaking the saturations and pressures are assumed to correspond to the cold zone. Analysis of the heat flow during soaking is included in the next section. The radius of the heated cylinder is calculated from a heat balance (for constant-quality steam injection). At the end of the soaking period it is assumed that little or no free gas is present near the well, and that the soaking period has been sufficiently long that the steam zone has had time to expand and the steam has condensed. The condition where there is no soaking is considered in the next section.
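The heat balance that sizes the heated cylinder can be sketched under the simplest possible assumptions: all heat injected above the initial formation temperature is stored in a cylinder of formation of the producing thickness, raised uniformly to steam temperature, with losses to adjacent strata ignored. The symbols and numerical values below are illustrative, not the paper's.

```python
import math

# Heat balance for the heated-cylinder radius:
#   Q = pi * r_h^2 * h * M * (T_s - T_i)
# where Q is heat injected above initial temperature (Btu), h is sand
# thickness (ft), M is volumetric heat capacity of the formation
# (Btu per cu ft per deg F), and T_s - T_i is the temperature rise.

def heated_radius_ft(heat_injected_btu, thickness_ft, vol_heat_cap, delta_t_f):
    """Solve the cylinder heat balance above for r_h (feet)."""
    return math.sqrt(heat_injected_btu /
                     (math.pi * thickness_ft * vol_heat_cap * delta_t_f))

# Hypothetical case: 2e9 Btu injected into a 50-ft sand,
# M = 35 Btu/(cu ft-F), temperature rise of 300 F.
r_h = heated_radius_ft(2.0e9, 50.0, 35.0, 300.0)
print(round(r_h, 1))
```

Because the radius goes as the square root of injected heat, doubling the steam slug enlarges the heated zone by only about 40 percent, which is one reason the optimum injection volume is an economic rather than purely physical question.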
PRODUCTION In this section, an approximate method is presented for solving the equations of heat and fluid flow associated with the production of oil and water during steam stimulation. Where initial pressure drawdowns are sufficient to cause steam to flow into the wellbore, steam production is assumed to occur within a short initial adjustment period (Appendix B). The production practices followed soon after the well is returned to production can have a large influence on the amount of oil produced during the stimulation cycle. Under most conditions, there is a period of time in which some or all of the water in the heated zone is converted into steam and produced. This flashing of hot water is caused by the pressure in the heated zone dropping below
-
Part XII – December 1968 – Papers - Deformation Behavior in the Near-Equiatomic Ni-Ti AlloysBy M. J. Marcinkowski, A. S. Sastri
A detailed compressive stress-strain analysis and transmission electron microscopy investigation has been made of the deformation behavior occurring in a 50 at. pct Ni-Ti (hypoeutectoid) alloy and a 54.5 at. pct Ni-Ti (hypereutectoid) alloy. In the case of the hypoeutectoid alloy, three stages of work hardening are observed. Stage I occurs at a very low stress and is associated with plastic deformation via martensite formation. Stage II is characterized by very rapid work hardening and is due to difficulties in causing further deformation in the fine martensite aggregate produced in Stage I. Stage III, which occurs at very high stress levels, is characterized by smaller work hardening rates and is due to the plastic deformation arising from alternate reconversions of the original martensites to martensites of varying orientation. Rapid quenching of the hypereutectoid alloy leads to very high yield strengths and is related to the fine precipitate dispersion that such treatment brings about. The present investigation represents the final phase of a three-part study directed toward an understanding of the solid-state transformations in near equiatomic Ni-Ti alloys as well as the deformation mechanisms associated with these alloys. In the first part, to be henceforth referred to as I, it was found that alternate simple shears on {112} planes and in ⟨111⟩ directions convert the parent B2 structure in the equiatomic NiTi alloy into two distinct close-packed monoclinic martensites. All of the martensites were of this type, whether they were formed by cooling or by plastic deformation, whether induced to form in bulk samples or in thin foils, or whether examined in the electron microscope at room temperature or below. On the other hand, in the second part of this investigation, to be referred to as II, it was shown that upon slow cooling to about 640°C,
alloys in the neighborhood of NiTi which possess the B2 structure transform eutectoidally into their equilibrium phases Ti2Ni and TiNi3. However, preceding the formation of these equilibrium phases, a series of metastable intermediate phases are formed. This paper will set as its goal the elucidation of the remarkable deformation behavior exhibited by NiTi. In particular, Buehler and Wiley4 have found equiatomic NiTi to be surprisingly soft, while Buehler et al.5 have shown this alloy to possess a memory effect: i.e., upon bending at room temperature it will revert to its original shape when heated to above about 50°C. In I it was shown that NiTi was soft in the sense that the yield stress was low; nevertheless, the alloy work-hardened at an extremely rapid rate to very high stress levels. On the other hand, the hypereutectoid alloys with somewhat higher nickel, say 54.5 at. pct (60 wt pct), have enormously increased yield strengths compared to those of the equiatomic alloys. In order to determine the atomistic processes giving rise to the above behavior, it was decided to examine samples that were wafered from bulk specimens deformed in compression to various strains using the techniques of transmission electron microscopy. EXPERIMENTAL TECHNIQUE All of the alloys used in the present investigation contained either 50 at. pct Ni (55.06 wt pct) or 54.5 at. pct Ni (60 wt pct) and were arc-melted in the form of a finger using the same techniques described in I and II. The finger was capsulated in a stainless-steel jacket and swaged at 850°C into rods. Compression specimens 0.300 in. long and 0.200 in. in diam were machined from these rods. In order to completely recrystallize the samples and remove residual stresses, all of them were capsulated in evacuated quartz, annealed for 1/2 hr at 1050°C, and then furnace-cooled.
Compression tests were carried out in an Instron tensile testing machine covering a range of temperatures from −196° to 200°C using procedures described previously.6,7 In all cases the crosshead speed was 0.02 in. per min. Wafers 0.015 in. thick were spark-cut from the cylindrical samples at 45 deg to the compression axes after they had been deformed to the desired strain. These specimens were then spark-planed to about 0.005 in. and then electrochemically thinned for examination by transmission electron microscopy as described in I.
Jan 1, 1969
-
Economics Of Pacific Rim CoalBy C. Richard Tinsley
Like most minerals, coal is inherently a demand-limited commodity. The very sedimentary nature of its occurrence implies greater availability potential than demand. But this situation is overridden by economics among fuels, between coals, and within coal blends. Such considerations make coal forecasting a very hazardous profession indeed. THERMAL COAL If one thought that the lead times involved with a mining project were very long, one has obviously not been exposed to the planning process in the electric generation business - a process seriously confounded by shifts in load growth, environmental pressures, capital intensity, security of fuel sourcing, inter-fuel economics, and so on. But as a general rule, the near-term forecasts for thermal coal can reliably be based on a bottom-up, plant-by-plant analysis. Cement plant conversions can also be reasonably estimated, next in order of reliability, although they have a much wider spectrum of coal qualities and fuel sources to choose from, with a notably higher tolerance for sulfur and ash. Finally, industrial demand can be assembled from the estimates for conversions by pulp/paper plants, chemical plants, etc. The industrial sector is harder to estimate, since it may involve small boilers or dual-fired units. Assessing demand in the Pacific Rim is a relatively straightforward process in the near term because the major importing countries are all located on the Asian continent with either negligible or very minor (yet stable) indigenous coal production (itself often operated on a subsidized basis). Furthermore, all imports are seaborne. These major importers are Japan, Korea, Taiwan, and Hong Kong, with Thailand, Singapore, and Malaysia up-and-coming consumers. The suppliers to this market all have substantial reserves to back up decades of exports to these countries. Australia, the US, Canada, South Africa, China, and the USSR dominate the supply side.
The second oil-shock of 1979/1980 has convinced the importers that reliance on oil can be expensive and eminently interruptible. Thus, they are determined to diversify away from oil to nuclear and coal for generating electricity, and to coal for other purposes where possible. This trend is seen to continue even in the face of the oil glut worldwide and oil-price reductions in early 1982. But the importers are also convinced that reliance on one coal source and, in particular, one infrastructure route for the coal chain from mine to consumer can be equally expensive and interruptible. Strikes in the US and Australia; excessive demurrage at certain ports; relegation of coal to a lower priority on multiple-use railroads in the USSR and China; and concern over escalation on high-infrastructure or high-freight coal chains are among the risks worrying the importers. As a consequence, Pacific Rim thermal coal purchases are being allocated among supplier nations, between ports, and within each country. An example of Japan's shift away from Australia and toward the US and Canada is shown in the estimates in Table 1. But the confidence of the import estimates deteriorates sharply beyond the plant conversion timetables and construction schedules in the near term. If part of the second generation of coal-fired power plants can handle lower-energy coals, the field of suppliers could widen to accept sizeable tonnages from Alaska, Wyoming, Alberta, or New Zealand resources. These supply sources generally have some infrastructure or freight advantage to compensate for their lower quality and to compete on a delivered energy-unit basis. These also offer diversification in sourcing. And the possibility of coal liquefaction in Japan further widens the sourcing network. A great number of Pacific Rim coal forecasts have been generated, especially for Japanese thermal-coal imports, which are expected to grow strongly in the 1980's.
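Competition "on a delivered energy-unit basis," as described above, reduces to a simple comparison of landed cost per unit of heating value. The sketch below illustrates how a lower-quality coal with a freight advantage can undercut a premium coal; all prices and heating values are assumed for illustration and are not taken from the paper's tables.

```python
# Delivered cost per million Btu: (FOB price + ocean freight) divided by
# the energy content of a ton of that coal. Buyers compare coals on this
# figure, not on price per ton alone.

def cost_per_mmbtu(fob_per_ton, freight_per_ton, btu_per_lb):
    """US dollars per million Btu, delivered, for a 2,000-lb short ton."""
    mmbtu_per_ton = btu_per_lb * 2000 / 1.0e6
    return (fob_per_ton + freight_per_ton) / mmbtu_per_ton

# Hypothetical comparison:
premium = cost_per_mmbtu(45.0, 20.0, 12000)   # high-rank bituminous, long haul
low_rank = cost_per_mmbtu(25.0, 12.0, 8500)   # sub-bituminous, shorter haul
print(round(premium, 2), round(low_rank, 2))
```

On these assumed numbers the lower-energy coal is cheaper per delivered Btu despite its quality handicap, which is the economic opening the text describes for Alaskan, Wyoming, Alberta, or New Zealand supply.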
Since the Japanese themselves have not yet settled their energy policy, the exact numbers are hard to call. Nevertheless, at 50 million tonnes of imports in 1990, Japan would consume 50-60% of the total Asian thermal coal imports, as shown on Tables 2 and 6. The next most important consumers are the "island" nations of Korea, Taiwan, and Hong Kong (see Tables 3-5). All three are embarking on power plant developments, usually with captive unloading facilities capable of accepting vessels of more than 100,000 dwt. Korea, with no indigenous bituminous coal, is not especially enamoured with US coals, which are deemed too heavily loaded by freight and infrastructure costs -- up to 70% of the delivered price. Thermal coal contracts are presently split between Australia (70%) and Canada (30%). Korea Electric Power Co. is already considering second-generation boilers capable of burning lower-quality coals than the present standard. Korea does burn domestic anthracite.
Jan 1, 1982
-
Mining the San Juan Orebody El Mochito Mine, Honduras, Central AmericaBy Robert C. Paddock
INTRODUCTION A way of producing 3,000 tpd from the El Mochito Mine was needed. Of this production, 2,000 tpd must come from the San Juan orebody. The original sub-level stoping method did not give satisfactory results due to ground instability and the highly irregular ore/waste contacts encountered. The experience gained from the initial system helped guide research into the ground instability problem. Results from this work, combined with knowledge gained about the orebody configuration, defined constraints that were previously not fully appreciated. These constraints, and others, were considered together with the objectives to develop a new mining method. No single technique was found to be suitable, so a hybrid mining system was developed: a combination of ramping, cut and fill, and vertical crater retreat, with an option to use top heading and benching. To complement the mining system, the type of equipment needed was decided upon. Also, to support the mining system at this expanded rate of production, major modifications of existing infrastructure were required. THE EL MOCHITO MINE The El Mochito Mine, of Rosario Resources Corporation, has been in continuous production since 1948. The mine began operations in April of that year at a rate of 100 tpd. The reserves in 1948 were 100,000 tons of silver ore assayed at 1,250 grams per tonne. As of the end of 1979, the El Mochito orebodies have produced over 5.6 million tonnes of ore averaging 516 grams per tonne silver, 6.8% lead, and 7.8% zinc. Present ore reserves are about 7.9 million tonnes, averaging 138 grams per tonne silver, 4.6% lead, and 8.7% zinc, with minor quantities of copper, cadmium and gold. An expansion plan to increase mill production twofold to 2,500 tonnes per day is underway. This expansion will require the mine to produce 3,000 tpd. The mine consists of numerous orebodies, all of which have been mined to a certain extent.
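The reserve figures quoted above (about 7.9 million tonnes at 138 g/t silver, 4.6% lead, and 8.7% zinc) imply the following in-place metal content. This is a simple tonnage-times-grade sketch with no allowance for mining dilution or mill recovery, so the numbers overstate what would actually be shipped.

```python
# Contained metal in place = ore tonnage x grade.
# Percent grades are expressed as fractions; silver grade is in g/t,
# so dividing by 1e6 converts grams of silver to tonnes.

reserves_t = 7.9e6                       # tonnes of ore (from the text)
ag_tonnes = reserves_t * 138.0 / 1e6     # 138 g/t silver -> tonnes of silver
pb_tonnes = reserves_t * 0.046           # 4.6% lead
zn_tonnes = reserves_t * 0.087           # 8.7% zinc
print(round(ag_tonnes, 1), int(pb_tonnes), int(zn_tonnes))
```

Roughly a thousand tonnes of silver and several hundred thousand tonnes each of lead and zinc in place, which puts the orebody's importance to the mine's future in perspective.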
Of all the orebodies, the San Juan contains 85% of known reserves. This amounts to about 6.7 million tonnes. The significance of the San Juan orebody to the future life of the El Mochito Mine is obvious. If the required mine production of 3,000 tpd is to be sustained, the San Juan must be the source of the majority of that production. Due to the mineability and overall logistics concerned with the other orebodies, the San Juan must be able to reach and maintain a production rate of 2,000 tpd by 1982. GEOLOGY OF THE SAN JUAN OREBODY The El Mochito Mine is a classic example of a chimney replacement deposit in limestone. Similar deposits are found in Mexico, at the Naica, Providencia, and Santa Eulalia Mines. The El Mochito Mine is located at the southwestern end of the Sula Valley on the western edge of the Honduras Depression in the Central Cordillera and Central Highlands of Honduras in a setting of Mesozoic sediments. The orebodies occur in a structural basin developed between NNE trending normal faults and apparently hinged on the south end. Topographically, the Mochito Basin lies between the uplifted Santa Barbara mountain in the west and the Palmer Ridge on the east. The San Juan orebody occurs near the intersection of the NE trending San Juan fault and the ENE trending Porvenir fault. The downward continuation of the orebody is controlled by the westward rake of these NW and N dipping structures. The discovery of the San Juan orebody is attributed to analysis of structural evidence of known ore deposits by in-company geologists. The composition of the San Juan orebody is primarily garnet skarn, with local concentrations of hedenbergite and magnetite. The economically important sulfide mineralization consists of (in decreasing abundance) sphalerite, galena, pyrrhotite, and chalcopyrite. There is some indication that a Cu-Ag mineral such as tetrahedrite may also be present.
The skarns were formed by replacement of the original limestone by hydrothermal water migrating upward roughly along the intersection between the Porvenir fault system and the San Juan fault system. Textural evidence suggests that the orebody is a composite of several pulses of hydrothermal activity, which would explain, in part, the great irregularity of the contacts and the large horizontal variation in mineralogy. A general pattern of skarn types can be seen in the orebody, partially accounting for the observed lateral variation in grades. This zonation is very generalized, and one or more zones may be missing in any given locality. The orebody is almost invariably surrounded by a 2 cm to 25 cm zone of bustamite skarn with low values. The border skarn is usually
Jan 1, 1981
-
Capillarity - Permeability - Evaluation of Capillary Character in Petroleum Reservoir RockBy Walter Rose, W. A. Bruce
Improved apparatus, methods, and experimental techniques for determining the capillary pressure-saturation relation are described in detail. In this connection a new multi-core procedure has been developed which simplifies the experimental work in the study of relatively homogeneous reservoirs. The basic theory concerning the Leverett capillary pressure function has been extended and has been given some practical application. Some discussion is presented to indicate the relationship of relative permeability to capillary pressure, and to provide a new description of capillary pressure phenomena by introducing the concept of the psi function. INTRODUCTION For the purposes of this paper the capillary character of a porous medium will be defined to express the basic properties of the system, which produce observed results of fluid behavior. These basic properties may be classified in the following manner, according to their relationship to: (a) The geometrical configuration of the interstitial spaces. This involves consideration of the packing of the particles, producing points of grain contact, and variations in pore size distribution. The packing itself is often modified by the secondary processes of mineralization which introduces factors of cementation, and of solution action which causes alteration of pore structure. (b) The physical and chemical nature of the interstitial surfaces. This involves consideration of the presence of interstitial clay coatings, the existence of non-uniform wetting surfaces; or, more generally, a consideration of the tendency towards variable interaction between the interstitial surfaces and the fluid phases saturating the interstitial spaces. (c) The physical and chemical properties of the fluid phases in contact with the interstitial surfaces. 
This involves consideration of the factors of surface, interfacial and adhesion tensions; contact angles; viscosity; density difference between immiscible fluid phases; and other fluid properties. Fine grained, granular, porous materials such as are found in petroleum reservoir rock possess characteristics which are expressible by (1) permeability, (2) porosity, and (3) the capillary pressure-saturation behavior of immiscible fluids in this medium. These three measurable macroscopic properties depend upon the microscopic properties enumerated above in a manner which defines the capillary character. Systems of capillary tubes or regularly packed spheres may be thought of as ideal, and numerous references can be cited in which exact mathematical formulations are developed to show the relationships governing the static distribution and dynamic motion of fluids in their interstitial spaces. The capillary character of non-ideal porous systems such as reservoir rock also is basic in determining the behavior of fluids contained therein; although, in general, the connection is not mathematically derivable but must be approached through indirect experimental measurement. This paper gives consideration to the evaluation of petroleum reservoir rock capillary character. The methods employed may be applied to the solution of problems in other fields, and the conclusions reached should contribute to the basic capillary theory of any porous system containing fluid phases. In this paper, a modification of the core analysis method of capillary pressure is employed, and it is intended to show that the capillary character of reservoir rock can be expressed in terms of experimental quantities. A very general method of interpretation correlating the capillary pressure tests with fundamental characteristics such as rock texture, surface areas, permeability, and occasionally clay content and cementation is introduced.
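The Leverett capillary pressure function mentioned above is the standard way of expressing this scaling: dividing capillary pressure by the interfacial tension and wettability term, and multiplying by the square root of permeability over porosity, collapses curves from cores of different texture onto one dimensionless curve. The sketch below uses assumed sample values (a fully water-wet, one-darcy core); the units and numbers are illustrative, not the paper's data.

```python
import math

# Leverett J-function:  J(Sw) = (Pc / (sigma * cos(theta))) * sqrt(k / phi)
# Pc    capillary pressure (dyn/sq cm)
# sigma interfacial tension (dyn/cm); cos(theta) accounts for wettability
# k     permeability (sq cm); phi  fractional porosity
# sqrt(k/phi) has units of length, making J dimensionless.

def leverett_j(pc_dyn_cm2, sigma_dyn_cm, cos_theta, k_cm2, phi):
    """Dimensionless Leverett J value at one saturation point."""
    return (pc_dyn_cm2 / (sigma_dyn_cm * cos_theta)) * math.sqrt(k_cm2 / phi)

# Hypothetical water-oil point: Pc = 5e4 dyn/sq cm, sigma = 30 dyn/cm,
# cos(theta) = 1 (water-wet), k = 1 darcy (~1e-8 sq cm), phi = 0.25.
j = leverett_j(5.0e4, 30.0, 1.0, 1.0e-8, 0.25)
print(round(j, 3))
```

If two cores of similar capillary character give the same J at the same saturation, a capillary pressure curve measured on one can be rescaled to predict the other, which is the practical use of the multi-core correlation the paper develops.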
Finally, an attempt is made to establish a method of deriving relative permeability to the wetting phase from capillary pressure data. The experimental evaluation of capillary character must be approached in a statistical manner if reservoir properties are to be inferred from data on small cores. This is implied by the heterogeneous character of most petroleum reservoirs, and suggests that considerable intelligence should be applied in core sampling. Finally, this paper supports the view that once the capillary character of a given type of reservoir rock has been established by core analysis, fluid behavior can then be inferred in other similar rock. Although no great progress has been made in establishing what variation can be tolerated without altering the basic fluid behavior properties, evidence will be presented to indicate that certain reservoir formations are sufficiently homogeneous with respect to capillary character that the data obtained on one core will be useful in predicting the properties of other cores of similar origin. Tests have shown that cores under consideration can vary widely with respect to porosity and permeability and still be considered similar in capillary character. EXPERIMENTAL METHODS AND TECHNIQUES Various types of displacement cell apparatus for capillary pressure experiments have been described in the literature. Bruce and Welge; Thornton and Marshall; McCullough, Albaugh and Jones; Hassler and Brunner; Lever-
Jan 1, 1949